31 
On safe tractable approximations of chance-constrained linear matrix inequalities with partly dependent perturbations. January 2011 (has links)
Cheung, Sin Shuen. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (leaves 55-59). / Abstracts in English and Chinese. / Abstract  p.i / Acknowledgement  p.iii / Chapter 1  Introduction  p.1 / Chapter 2  Preliminaries  p.5 / Chapter 2.1  Heuristics by Hoeffding and Janson  p.5 / Chapter 2.2  Facts in Matrix Theory  p.10 / Chapter 2.3  Facts in Probability  p.11 / Chapter 3  General Chance-Constrained LMIs  p.18 / Chapter 3.1  Probability Inequalities  p.18 / Chapter 3.2  Safe Tractable Approximations  p.22 / Chapter 4  Chance-Constrained Quadratically Perturbed LMIs  p.27 / Chapter 4.1  Exact Proper Fractional Covers  p.27 / Chapter 4.2  Bounding the Matrix Moment Generating Functions  p.32 / Chapter 5  Computational Study  p.39 / Chapter 5.1  Update Procedures for Safe Tractable Approximations  p.39 / Chapter 5.2  A Numerical Example and Comparisons  p.44 / Chapter 6  Conclusion  p.54 / Bibliography  p.55

32 
Integration of constraint programming and linear programming techniques for constraint satisfaction problem and general constrained optimization problem. January 2001 (has links)
Wong Siu Ham. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 131-138). / Abstracts in English and Chinese. / Abstract  p.ii / Acknowledgments  p.vi / Chapter 1  Introduction  p.1 / Chapter 1.1  Motivation for Integration  p.2 / Chapter 1.2  Thesis Overview  p.4 / Chapter 2  Preliminaries  p.5 / Chapter 2.1  Constraint Programming  p.5 / Chapter 2.1.1  Constraint Satisfaction Problems (CSP's)  p.6 / Chapter 2.1.2  Satisfiability (SAT) Problems  p.10 / Chapter 2.1.3  Systematic Search  p.11 / Chapter 2.1.4  Local Search  p.13 / Chapter 2.2  Linear Programming  p.17 / Chapter 2.2.1  Linear Programming Problems  p.17 / Chapter 2.2.2  Simplex Method  p.19 / Chapter 2.2.3  Mixed Integer Programming Problems  p.27 / Chapter 3  Integration of Constraint Programming and Linear Programming  p.29 / Chapter 3.1  Problem Definition  p.29 / Chapter 3.2  Related works  p.30 / Chapter 3.2.1  Illustrating the Performances  p.30 / Chapter 3.2.2  Improving the Searching  p.33 / Chapter 3.2.3  Improving the representation  p.36 / Chapter 4  A Scheme of Integration for Solving Constraint Satisfaction Problem  p.37 / Chapter 4.1  Integrated Algorithm  p.38 / Chapter 4.1.1  Overview of the Integrated Solver  p.38 / Chapter 4.1.2  The LP Engine  p.44 / Chapter 4.1.3  The CP Solver  p.45 / Chapter 4.1.4  Proof of Soundness and Completeness  p.46 / Chapter 4.1.5  Compared with Previous Work  p.46 / Chapter 4.2  Benchmarking Results  p.48 / Chapter 4.2.1  Comparison with CLP solvers  p.48 / Chapter 4.2.2  Magic Squares  p.51 / Chapter 4.2.3  Random CSP's  p.52 / Chapter 5  A Scheme of Integration for Solving General Constrained Optimization Problem  p.68 / Chapter 5.1  Integrated Optimization Algorithm  p.69 / Chapter 5.1.1  Overview of the Integrated Optimizer  p.69 / Chapter 5.1.2  The CP Solver  p.74 / Chapter 5.1.3  The LP Engine  p.75 / Chapter 5.1.4  Proof of the Optimization  p.77 / Chapter 5.2  Benchmarking Results  p.77 / 
Chapter 5.2.1  Weighted Magic Square  p.77 / Chapter 5.2.2  Template design problem  p.78 / Chapter 5.2.3  Random GCOP's  p.79 / Chapter 6  Conclusions and Future Work  p.97 / Chapter 6.1  Conclusions  p.97 / Chapter 6.2  Future work  p.98 / Chapter 6.2.1  Detection of implicit equalities  p.98 / Chapter 6.2.2  Dynamical variable selection  p.99 / Chapter 6.2.3  Analysis on help of linear constraints  p.99 / Chapter 6.2.4  Local Search and Linear Programming  p.99 / Appendix  p.101 / Proof of Soundness and Completeness  p.101 / Proof of the optimization  p.126 / Bibliography  p.130

33 
An integer programming approach for the satisfiability problems. January 2001 (has links)
by Lui Oi Lun Irene. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2001. / Includes bibliographical references (leaves 128-132). / Abstracts in English and Chinese. / List of Figures  p.vii / List of Tables  p.viii / Chapter 1  Introduction  p.1 / Chapter 1.1  Satisfiability Problem  p.1 / Chapter 1.2  Motivation of the Research  p.1 / Chapter 1.3  Overview of the Thesis  p.2 / Chapter 2  Constraint Satisfaction Problem and Satisfiability Problem  p.4 / Chapter 2.1  Constraint Programming  p.4 / Chapter 2.2  Satisfiability Problem  p.6 / Chapter 2.3  Methods in Solving SAT problem  p.7 / Chapter 2.3.1  Davis-Putnam-Loveland Procedure  p.7 / Chapter 2.3.2  SATZ by Chu-Min Li  p.8 / Chapter 2.3.3  Local Search for SAT  p.11 / Chapter 2.3.4  Integer Linear Programming Method for SAT  p.12 / Chapter 2.3.5  Semidefinite Programming Method  p.13 / Chapter 2.4  Softwares for SAT  p.15 / Chapter 2.4.1  SAT01  p.15 / Chapter 2.4.2  SATZ and SATZ213, contributed by Chu-Min Li  p.15 / Chapter 2.4.3  Others  p.15 / Chapter 3  Integer Programming  p.17 / Chapter 3.1  Introduction  p.17 / Chapter 3.1.1  Formulation of IPs and BIPs  p.18 / Chapter 3.1.2  Binary Search Tree  p.19 / Chapter 3.2  Methods in Solving IP problem  p.19 / Chapter 3.2.1  Branch-and-Bound Method  p.20 / Chapter 3.2.2  Cutting-Plane Method  p.23 / Chapter 3.2.3  Duality in Integer Programming  p.26 / Chapter 3.2.4  Heuristic Algorithm  p.28 / Chapter 3.3  Zero-one Optimization and Continuous Relaxation  p.29 / Chapter 3.3.1  Introduction  p.29 / Chapter 3.3.2  The Roof Dual expressed in terms of Lagrangian Relaxation  p.30 / Chapter 3.3.3  Determining the Existence of a Duality Gap  p.31 / Chapter 3.4  Software for solving Integer Programs  p.33 / Chapter 4  Integer Programming Formulation for SAT Problem  p.35 / Chapter 4.1  From 3-CNF SAT Clauses to Zero-One IP Constraints  p.35 / Chapter 4.2  From m-Constrained IP Problem to Singly-Constrained IP Problem  p.38 / Chapter 4.2.1  Example  p.39 / Chapter 5  A 
Basic Branch-and-Bound Algorithm for the Zero-One Polynomial Maximization Problem  p.42 / Chapter 5.1  Reason for choosing Branch-and-Bound Method  p.42 / Chapter 5.2  Searching Algorithm  p.43 / Chapter 5.2.1  Branch Rule  p.44 / Chapter 5.2.2  Bounding Rule  p.46 / Chapter 5.2.3  Fathoming Test  p.46 / Chapter 5.2.4  Example  p.47 / Chapter 6  Revised Bound Rule for Branch-and-Bound Algorithm  p.55 / Chapter 6.1  Revised Bound Rule  p.55 / Chapter 6.1.1  CPLEX  p.57 / Chapter 6.2  Example  p.57 / Chapter 6.3  Conclusion  p.65 / Chapter 7  Revised Branch Rule for Branch-and-Bound Algorithm  p.67 / Chapter 7.1  Revised Branch Rule  p.67 / Chapter 7.2  Comparison between Branch Rule and Revised Branch Rule  p.69 / Chapter 7.3  Example  p.72 / Chapter 7.4  Conclusion  p.73 / Chapter 8  Experimental Results and Analysis  p.80 / Chapter 8.1  Experimental Results  p.80 / Chapter 8.2  Statistical Analysis  p.83 / Chapter 8.2.1  Analysis of Search Techniques  p.83 / Chapter 8.2.2  Discussion of the Performance of SATZ  p.85 / Chapter 9  Concluding Remarks  p.87 / Chapter 9.1  Conclusion  p.87 / Chapter 9.2  Suggestions for Future Research  p.88 / Chapter A  Searching Procedures for Solving Constraint Satisfaction Problem (CSP)  p.91 / Chapter A.1  Notation  p.91 / Chapter A.2  Procedures for Solving CSP  p.92 / Chapter A.2.1  Generate and Test  p.92 / Chapter A.2.2  Standard Backtracking  p.93 / Chapter A.2.3  Forward Checking  p.94 / Chapter A.2.4  Looking Ahead  p.95 / Chapter B  Complete Results for Experiments  p.96 / Chapter B.1  Complete Result for SATZ  p.96 / Chapter B.1.1  n = 5  p.96 / Chapter B.1.2  n = 10  p.98 / Chapter B.1.3  n = 30  p.99 / Chapter B.2  Complete Result for Basic Branch-and-Bound Algorithm  p.101 / Chapter B.2.1  n = 5  p.101 / Chapter B.2.2  n = 10  p.104 / Chapter B.2.3  n = 30  p.107 / Chapter B.3  Complete Result for Revised Bound Rule  p.109 / Chapter B.3.1  n = 5  p.109 / Chapter B.3.2  n = 10  p.112 / Chapter B.3.3  n = 30  p.115 / Chapter B.4  
Complete Result for Revised Branch-and-Bound Algorithm  p.118 / Chapter B.4.1  n = 5  p.118 / Chapter B.4.2  n = 10  p.121 / Chapter B.4.3  n = 30  p.124 / Bibliography  p.128
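Chapter 4.1 of this thesis covers the standard transformation from 3-CNF SAT clauses to zero-one IP constraints. As an illustration of that well-known encoding (a sketch, not the thesis's own code): each positive literal x contributes the variable x, each negated literal contributes (1 - x), and the clause requires the sum of its terms to be at least 1.

```python
# Standard encoding of a CNF clause as a zero-one integer constraint:
# a positive literal x contributes +x, a negated literal contributes (1 - x),
# and the clause requires the sum of its literal terms to be >= 1.
# (Illustrative sketch; not the thesis's actual code.)

def clause_to_constraint(clause):
    """clause: list of ints, +i for x_i, -i for NOT x_i.
    Returns (coeffs, rhs) meaning sum(coeffs[i] * x_i) >= rhs."""
    coeffs = {}
    rhs = 1
    for lit in clause:
        var = abs(lit)
        if lit > 0:
            coeffs[var] = coeffs.get(var, 0) + 1
        else:
            coeffs[var] = coeffs.get(var, 0) - 1
            rhs -= 1          # the constant 1 of (1 - x) moves to the right side
    return coeffs, rhs

def satisfies(assignment, clause):
    """Check a 0/1 assignment against the clause's IP constraint."""
    coeffs, rhs = clause_to_constraint(clause)
    return sum(c * assignment[v] for v, c in coeffs.items()) >= rhs

# (x1 OR NOT x2 OR x3)  becomes  x1 - x2 + x3 >= 0
coeffs, rhs = clause_to_constraint([1, -2, 3])
print(coeffs, rhs)   # {1: 1, 2: -1, 3: 1} 0
```

A clause is satisfied exactly when its constraint holds, so a full CNF formula becomes a system of such inequalities over binary variables.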

34 
Electra : integrating constraints, condition-based dispatching, and feature exclusion into the multi-paradigm language Leda
Zamel, Nabil M. 06 December 1994 (has links)
Multi-paradigm languages are languages that are designed to support more than
one style of programming. Leda is a strongly-typed multi-paradigm programming
language that supports imperative, functional, object-oriented, and logic programming.
The constraint programming paradigm is a declarative style of programming
where the programmer is able to state relationships among some entities and expect
the system to maintain the validity of these relationships throughout program execution.
The system accomplishes this either by invoking user-defined fixes that impose
rigid rules governing the evolution of the entities, or by finding suitable values to
be assigned to the constrained entities without violating any active constraint. Constraints,
due to their declarative semantics, are suitable for the direct mapping of the
characteristics of a number of mechanisms including: consistency checks, constraint-directed
search, and constraint-enforced reevaluation, among others. This makes
constraint languages the most appropriate languages for the implementation of a
large number of applications such as scheduling, planning, resource allocation, simulation,
and graphical user interfaces.
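The system-maintained relationships described above can be illustrated with a small toy sketch (this is not Electra or Leda syntax; the `Cell` and `Constrained` names are invented for illustration): whenever an input entity changes, the system re-evaluates the stated relation so that it keeps holding.

```python
# Minimal one-way constraint maintenance: a constrained cell recomputes its
# value from its inputs whenever an input changes, so the stated relationship
# holds throughout execution. (A toy illustration, not Electra's mechanism.)

class Cell:
    def __init__(self, value=0):
        self._value = value
        self._dependents = []   # constrained cells to re-evaluate on change

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, v):
        self._value = v
        for dep in self._dependents:
            dep.refresh()

class Constrained(Cell):
    """A cell whose value the system keeps equal to rule(*inputs)."""
    def __init__(self, rule, inputs):
        super().__init__()
        self._rule = rule
        self._inputs = inputs
        for c in inputs:
            c._dependents.append(self)
        self.refresh()

    def refresh(self):
        self.value = self._rule(*(c.value for c in self._inputs))

# Maintain the relation: total = price * quantity
price, quantity = Cell(3), Cell(4)
total = Constrained(lambda p, q: p * q, [price, quantity])
print(total.value)      # 12
price.value = 5         # the system re-establishes the relation automatically
print(total.value)      # 20
```

This corresponds to the "user-defined fixes" flavor of constraint maintenance; the alternative the paragraph mentions, searching for values that satisfy all active constraints, is what a constraint satisfier does.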
The semantics of constraints cannot be easily emulated by other constructs
in the paradigms that are offered by the language Leda. However, the constraint
paradigm does not provide any general control constructs. The lack of general control
constructs impedes this paradigm's ability to naturally express a large number
of problems. This dissertation presents the language Electra, which integrates the
constraint paradigm into the language Leda by creating a unified construct that
provides the ability to express the conventional semantics of constraints with some
extensions. Due to the flexibility of this construct, the programmer is given the
choice of either stating how a constraint is to be satisfied or delegating that task to
the constraint satisfier. The concept of providing the programmer with the ability to
express system-maintained relations, which is the basic characteristic of constraints,
provided a motivation for enhancing other paradigms with similar abilities. The
functional paradigm is extended by adding to it the mechanism of condition-based
dispatching, which is similar to argument pattern-matching. The object-oriented
paradigm is extended by allowing feature exclusion, which is a form of inheritance
exception. This dissertation claims that the integration provided by the language
Electra will enable Leda programmers to reap the benefits of the paradigm of constraints
while overcoming its limitations. / Graduation date: 1995

35 
On the Near-Optimality of List Scheduling Heuristics for Local and Global Instruction Scheduling
Chase, Michael January 2006 (has links)
Modern architectures allow multiple instructions to be issued at once and have other complex features. To account for this, compilers perform instruction scheduling after generating the output code. The instruction scheduling problem is to find an optimal schedule given the limitations and capabilities of the architecture. While this can be done optimally, a greedy algorithm known as list scheduling is used in practice in most production compilers.

List scheduling is generally regarded as being near-optimal in practice, provided a good choice of heuristic is used. However, previous work comparing a list scheduler against an optimal scheduler either makes the assumption that an idealized architectural model is being used or uses too few test cases to strongly prove or disprove the assumed near-optimality of list scheduling. It remains an open question whether or not list scheduling performs well when scheduling for a realistic architectural model.

Using constraint programming, we developed an efficient optimal scheduler capable of scheduling even very large blocks within a popular benchmark suite in a reasonable amount of time. I improved the architectural model and optimal scheduler by allowing for an issue width not equal to the number of functional units, instructions that monopolize the processor for one cycle, and non-fully-pipelined instructions. I then evaluated the performance of list scheduling for this more realistic architectural model.

I found that when scheduling basic blocks using a realistic architectural model, only 6% or less of schedules produced by a list scheduler are non-optimal, but when scheduling superblocks, at least 40% of schedules produced by a list scheduler are non-optimal. Furthermore, when the list scheduler and optimal scheduler differed, the optimal scheduler was able to improve schedule cost by at least 5% on average, realizing maximum improvements of 82%. 
This suggests that list scheduling is only a viable solution in practice when scheduling basic blocks. When scheduling superblocks, the advantage of using a list scheduler is its speed, not the quality of schedules produced, and other alternatives to list scheduling should be considered.
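The greedy list-scheduling algorithm this abstract evaluates can be sketched as follows (an illustrative model assuming uniform functional units and a critical-path priority heuristic, one common choice; this is not the authors' implementation).

```python
# A greedy list scheduler for a basic block (illustrative sketch).
# Instructions form a dependence DAG; an instruction is "ready" once every
# predecessor has issued and its latency has elapsed.  Each cycle, the
# highest-priority ready instructions are issued, up to the issue width.
# Priority here is critical-path distance, a common list-scheduling heuristic.

def list_schedule(succs, latency, issue_width):
    """succs: {node: [successor, ...]} for every node; latency: {node: int}."""
    nodes = list(succs)
    preds = {n: [] for n in nodes}
    for n, ss in succs.items():
        for s in ss:
            preds[s].append(n)

    # Critical-path priority: longest latency-weighted path to any leaf.
    cp = {}
    def critical_path(n):
        if n not in cp:
            cp[n] = latency[n] + max((critical_path(s) for s in succs[n]),
                                     default=0)
        return cp[n]

    finish = {}      # cycle at which each scheduled node's result is available
    schedule = {}    # node -> issue cycle
    cycle = 0
    while len(schedule) < len(nodes):
        ready = [n for n in nodes if n not in schedule
                 and all(p in finish and finish[p] <= cycle for p in preds[n])]
        for n in sorted(ready, key=critical_path, reverse=True)[:issue_width]:
            schedule[n] = cycle
            finish[n] = cycle + latency[n]
        cycle += 1
    return schedule

# Diamond DAG a -> {b, c} -> d, unit latencies, issue width 2.
succs = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
latency = {n: 1 for n in succs}
print(list_schedule(succs, latency, issue_width=2))
# {'a': 0, 'b': 1, 'c': 1, 'd': 2}
```

The greedy choice is made once per cycle and never revisited, which is why list scheduling is fast but, as the abstract shows, not always optimal.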

36 
Constraint Programming Techniques for Optimal Instruction Scheduling
Malik, Abid 03 1900 (has links)
Modern processors have multiple pipelined functional units
and can issue more than one instruction per clock cycle. This
puts great pressure on the instruction scheduling phase in
a compiler to expose maximum instruction level parallelism.
Basic blocks and superblocks are commonly used regions of code
in a program for instruction scheduling. Instruction scheduling
coupled with register allocation is also a well studied problem
to produce better machine code. Scheduling basic blocks and
superblocks optimally, with or without register allocation, is
NP-complete, and is done suboptimally in production compilers
using heuristic approaches. In this thesis, I present a
constraint programming approach to the superblock and basic
block instruction scheduling problems for both idealized and
realistic architectures. Basic block scheduling with register
allocation with no spilling allowed is also considered. My
models for both basic block and superblock scheduling are
optimal and fast enough to be incorporated into production
compilers. I experimentally evaluated my optimal schedulers on
the SPEC 2000 integer and floating point benchmarks. On this
benchmark suite, the optimal schedulers were very robust and
scaled to the largest basic blocks and superblocks. Depending
on the architectural model, between 99.991% and 99.999% of all
basic blocks and superblocks were solved to optimality. The
schedulers were able to routinely solve the largest blocks,
including blocks with up to 2600 instructions. My results compare
favorably to the best previous optimal approaches, which are based
on integer programming and enumeration.
My approach for basic block scheduling without allowing spilling was good
enough to solve 97.496% of all basic blocks in the SPEC 2000
benchmark. The approach was able to solve basic blocks as
large as 50 instructions for both idealized and realistic
architectures within reasonable time limits.
Again, my results compare favorably to recent work
on optimal integrated code generation, which is
based on integer programming.
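To give a flavor of the constraint programming approach described above: each instruction gets an issue-cycle variable, and a dependence from i to j with latency l imposes t_j >= t_i + l; propagation then shrinks the variables' domains before any search. The sketch below shows only bounds propagation over such latency constraints (an assumed simplification; the thesis's solver uses much stronger propagation and search).

```python
# Bounds propagation for latency constraints t_j >= t_i + l in a
# constraint-programming model of instruction scheduling (illustrative
# sketch, not the thesis's solver).  Each variable's domain is an interval
# [lo, hi]; propagation tightens lower bounds forward along dependence
# edges and upper bounds backward until a fixed point is reached.

def propagate_bounds(edges, domains):
    """edges: [(i, j, latency)] meaning t_j >= t_i + latency.
    domains: {var: [lo, hi]}, mutated in place.  Returns False on failure."""
    changed = True
    while changed:
        changed = False
        for i, j, lat in edges:
            lo_i, hi_i = domains[i]
            lo_j, hi_j = domains[j]
            if lo_i + lat > lo_j:                  # tighten t_j's lower bound
                domains[j][0] = lo_i + lat
                changed = True
            if hi_j - lat < hi_i:                  # tighten t_i's upper bound
                domains[i][1] = hi_j - lat
                changed = True
        if any(lo > hi for lo, hi in domains.values()):
            return False                           # empty domain: infeasible
    return True

# a -> b (latency 2), b -> c (latency 1), all cycles initially in [0, 4]
doms = {'a': [0, 4], 'b': [0, 4], 'c': [0, 4]}
propagate_bounds([('a', 'b', 2), ('b', 'c', 1)], doms)
print(doms)   # {'a': [0, 1], 'b': [2, 3], 'c': [3, 4]}
```

When propagation empties a domain, the partial schedule is provably infeasible and the solver can backtrack immediately, which is a large part of why such models scale to the block sizes the abstract reports.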

37 
Combinatorial Problems in Compiler Optimization
Beg, Mirza Omer 08 April 2013 (has links)
Several important compiler optimizations such as instruction scheduling
and register allocation are fundamentally hard and are usually solved using heuristics
or approximate solutions.
In contrast, this thesis examines optimal solutions to three combinatorial problems in compiler optimization.
The first problem addresses instruction scheduling for clustered
architectures, popular in embedded systems. Given a set of
instructions the optimal solution gives the best possible schedule for a given clustered
architectural model. The problem is solved
using a decomposition technique applied to constraint programming which determines the spatial and
temporal schedule using an integrated approach. The experiments
show that our solver can trade off some compile-time efficiency to solve most instances in
standard benchmarks giving significant performance improvements.
The second problem addresses
instruction selection in the compiler code generation phase.
Given the intermediate representation of code the optimal solution
determines the sequence of equivalent machine instructions as it optimizes for code size.
This thesis shows that a large number of benchmark instances can be solved optimally
using constraint programming techniques.
The third problem addressed is the placement of data in memory for efficient
cache utilization.
Using the data access patterns of a given program, our algorithm
determines a placement to reorganize data in
memory which would result in fewer cache misses.
By focusing on graph theoretic placement techniques it is
shown that there exist, in special cases, efficient and optimal algorithms for
data placement that significantly
improve cache utilization. We also propose heuristic solutions for solving larger instances
for which provably optimal solutions cannot be determined using polynomial time algorithms.
We demonstrate that cache hit rates can
be significantly improved by using profiling techniques over a wide range of benchmarks and cache configurations.
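One simple way to act on access patterns, sketched here to illustrate the general idea rather than the thesis's graph-theoretic algorithm: estimate pairwise affinity from an access trace, then greedily pack high-affinity objects into the same cache-line-sized group so they are loaded together.

```python
# Affinity-based data placement (an illustrative sketch of the general idea,
# not the thesis's algorithm).  From an access trace, count how often two
# objects are accessed close together, then greedily pack high-affinity
# pairs into the same cache-line-sized group.

from collections import Counter

def affinity(trace, window=4):
    """Count co-occurrences of object pairs within a sliding window."""
    counts = Counter()
    for k in range(len(trace)):
        for other in set(trace[k + 1:k + window]):
            if other != trace[k]:
                counts[frozenset((trace[k], other))] += 1
    return counts

def place(trace, objects_per_line, window=4):
    """Greedily merge objects with highest affinity into cache-line groups."""
    group = {obj: frozenset([obj]) for obj in set(trace)}
    for pair, _ in affinity(trace, window).most_common():
        a, b = tuple(pair)
        ga, gb = group[a], group[b]
        if ga is not gb and len(ga) + len(gb) <= objects_per_line:
            merged = ga | gb
            for obj in merged:
                group[obj] = merged
    return set(group.values())

trace = ['x', 'y', 'x', 'y', 'z', 'w', 'z', 'w']
print(place(trace, objects_per_line=2))
# two groups: {'x', 'y'} and {'z', 'w'} (set order may vary)
```

Objects that alternate in the trace end up on the same line, so a miss that loads one also brings in its frequent companion.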

40 
Techniques for Efficient Constraint Propagation
Lagerkvist, Mikael Zayenz January 2008 (has links)
This thesis explores three new techniques for increasing the efficiency of constraint propagation: support for incremental propagation, improved representation of constraints, and abstractions to simplify propagation. Support for incremental propagation is added to a propagator-centered propagation system by adding a new intermediate layer of abstraction, advisors, that captures the essential aspects of a variable-centered system. Advisors are used to give propagators a detailed view of the dynamic changes between propagator runs. Advisors enable the implementation of optimal algorithms for important constraints such as extensional constraints and Boolean linear inequations, which is not possible in a propagator-centered system lacking advisors. Using Multivalued Decision Diagrams (MDDs) as the representation for extensional constraints is shown to be useful for several reasons. Classical operations on MDDs can be used to optimize the representation, thus speeding up propagation. In particular, the reduction operation is stronger than the use of DFA minimization for the regular constraint. The use of MDDs is contrasted and compared to a recent proposal where tables are compressed. Abstractions for constraint programs try to capture small and essential features of a model. These features may be much cheaper to propagate than the unabstracted program. The potential for abstraction is explored using several examples. These three techniques work on different levels. Support for incremental propagation is essential for the efficient implementation of some constraints, so that the algorithms have the right complexity. On a higher level, the question of representation looks at what a propagator should use for propagation. Finally, the question of abstraction can potentially look at several propagators, to find cases where abstractions might be fruitful. 
An essential feature of this thesis is a novel model for general placement constraints that uses regular expressions. The model is very versatile and can be used for several different kinds of placement problems. The model applied to the classic pentominoes puzzle will be used throughout the thesis as an example and for experiments. / Coordinating Constraint Propagation
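The MDD representation of extensional (table) constraints discussed in this abstract can be sketched as a layered trie whose identical subtrees are merged (an illustrative reduction in plain Python, not the thesis's or Gecode's implementation): shared suffixes are stored once, which is what makes MDD propagation cheaper than scanning a raw table.

```python
# An extensional (table) constraint stored as a layered decision diagram
# (illustrative sketch).  Building the allowed tuples into a trie and then
# merging identical subtrees yields a reduced MDD; shared suffixes are
# stored once instead of per tuple.

def build_mdd(tuples):
    """Reduced MDD as nested dicts; identical subtrees are shared."""
    cache = {}
    def build(suffixes):
        if not suffixes or not suffixes[0]:
            return True                       # accepting terminal
        children = {}
        for value in sorted({t[0] for t in suffixes}):
            rest = [t[1:] for t in suffixes if t[0] == value]
            children[value] = build(rest)
        key = tuple(sorted((v, id(c)) for v, c in children.items()))
        return cache.setdefault(key, children)   # merge identical subtrees
    return build([tuple(t) for t in tuples])

def supported(mdd, domains, level=0):
    """True iff some tuple in the MDD lies within the variable domains."""
    if mdd is True:
        return True
    return any(value in domains[level] and supported(child, domains, level + 1)
               for value, child in mdd.items())

table = [(0, 0, 1), (0, 1, 1), (1, 0, 1)]
mdd = build_mdd(table)
print(supported(mdd, [{1}, {0, 1}, {1}]))   # True:  (1, 0, 1) fits
print(supported(mdd, [{1}, {1}, {1}]))      # False: (1, 1, 1) not in table
```

A propagator would walk the same structure to collect, per variable, the values that still appear on some accepting path, pruning the rest.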
