91. Proof planning for logic program synthesis. Kraan, H. C. January 1994.
The area of logic program synthesis is attracting increased interest. Most efforts have concentrated on applying techniques from functional program synthesis to logic program synthesis. This thesis investigates a new approach: synthesizing logic programs automatically via middle-out reasoning in proof planning. [Bundy et al. 90a] suggested middle-out reasoning in proof planning. Middle-out reasoning uses variables to represent unknown details of a proof. Unification instantiates the variables in the subsequent planning, while proof planning provides the necessary search control. Middle-out reasoning is used for synthesis by planning the verification of an unknown logic program: the program body is represented with a meta-variable. The planning results in both an instantiation of the program body and a plan for the verification of that program. If the plan executes successfully, the synthesized program is partially correct and complete. Middle-out reasoning is also used to select induction schemes. Finding an appropriate induction scheme in synthesis is difficult, because the recursion in the program, which is unknown at the outset, determines the induction in the proof. In middle-out induction, we set up a schematic step case by representing the constructors applied to the induction variables with meta-variables. Once the step case is complete, the instantiated variables correspond to an induction appropriate to the recursion of the program. The results reported in this thesis are encouraging. The approach has been implemented as an extension of the proof planner CLAM [Bundy et al. 90b], called Periwinkle, which has been used to synthesize a variety of programs fully automatically.
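The mechanism the abstract describes, a meta-variable standing for the unknown program body that gets filled in as a side effect of unification during planning, can be sketched in miniature. This is an illustrative toy, not Periwinkle's actual representation; all term and predicate names here are made up for the example.

```python
# Toy sketch of middle-out reasoning: '?name' strings are meta-variables
# standing for unknown proof/program details; first-order unification
# instantiates them during planning. Illustrative only.

def is_meta(t):
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    while is_meta(t) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    """Tiny first-order unification over nested tuples and strings."""
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_meta(t1):
        return {**subst, t1: t2}
    if is_meta(t2):
        return {**subst, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# The unknown program body is the meta-variable '?Body'. Planning a
# verification step whose shape is known forces '?Body' to take a value.
goal = ('clause', ('rev', '?Xs', '?Ys'), '?Body')
step = ('clause', ('rev', '?Xs', '?Ys'), ('append', '?Zs', '?Xs1'))
s = unify(goal, step, {})
```

After unification, `s['?Body']` holds the instantiated body term: the program emerged from planning its own verification, which is the middle-out idea in one step.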
92. An algebraic approach to syntax, semantics and compilation. Stephenson, K. January 1996.
In this thesis, we develop an algebraic strategy and tools for modelling the syntax and semantics of programming languages and for proving the correctness of the process of compiling one language into another. Our first step in algebraically specifying language syntax is to apply a variation of an established technique of transforming a context-free grammar into a closed term algebra. Next, we design equational definitions of additional functions that act as a filter for the context-sensitive features of a language. To reduce the work involved in devising such specifications, we provide parameterised descriptions of commonly occurring language features. We illustrate the practicability of these modular methods by considering the algebraic specification of a range of programming languages and constructions. We then develop a modular algebraic method for defining operational semantics. The key to this is to employ a notion of time, by means of a simple clock, to enumerate the sequence of states produced by executing a program. We determine this behaviour by generating a sequence of atomic programs, such that the execution of each atomic program provides the next state in the execution sequence. We use functions that decompose the syntax one step at a time to determine which atomic program we should execute at each moment in time to simulate the behaviour of the entire program. We illustrate our technique with a wide-ranging set of examples. Finally, we describe how we can structure the compilation process using hierarchies of algebras, and how we can use equational methods to prove compiler correctness. The basis of our proof is in establishing correctness over just one step of time. We illustrate our technique with a case study of translating a high-level while language into instructions for a low-level register machine.
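The clocked-execution idea, decompose the syntax one step per tick to find the atomic action that yields the next state, can be sketched as a tiny interpreter for a while language. This is a plain operational sketch rather than the thesis's equational, algebraic specification; the program encoding is invented for the example.

```python
# Sketch of clocked execution: each call to step() performs one atomic
# action (one clock tick), decomposing the syntax to find it. Programs
# are nested tuples; conditions and assignments are Python lambdas.

def step(prog, state):
    """Return (remaining_prog_or_None, next_state) after one atomic action."""
    kind = prog[0]
    if kind == 'skip':
        return None, state
    if kind == 'assign':                    # ('assign', var, fn)
        _, var, fn = prog
        new = dict(state)
        new[var] = fn(state)
        return None, new
    if kind == 'seq':                       # ('seq', p1, p2)
        _, p1, p2 = prog
        rest, state = step(p1, state)       # decompose one step into p1
        return (('seq', rest, p2) if rest is not None else p2), state
    if kind == 'while':                     # ('while', cond, body)
        _, cond, body = prog
        if cond(state):                     # guard test takes one tick
            return ('seq', body, prog), state
        return None, state
    raise ValueError(kind)

def run(prog, state):
    """Enumerate the sequence of states, one per clock tick."""
    trace = [state]
    while prog is not None:
        prog, state = step(prog, state)
        trace.append(state)
    return trace

# x := 0; while x < 3 do x := x + 1
prog = ('seq', ('assign', 'x', lambda s: 0),
        ('while', lambda s: s['x'] < 3,
         ('assign', 'x', lambda s: s['x'] + 1)))
trace = run(prog, {'x': 7})
```

The `trace` list is exactly the enumerated state sequence the abstract describes; proving a compiler correct "over just one step of time" amounts to showing the compiled code's effect matches `step` on every tick.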
93. Particle swarm optimization in stationary and dynamic environments. Li, Changhe. January 2011.
Inspired by the social behavior of bird flocking and fish schooling, Eberhart and Kennedy first developed the particle swarm optimization (PSO) algorithm in 1995. PSO, as a branch of evolutionary computation, has been successfully applied in many research and application areas in the past several years, e.g., global optimization, artificial neural network training, and fuzzy system control. For global optimization in particular, PSO has shown superior effectiveness. Although PSO is an effective tool for global optimization problems, it shows weakness when solving complex problems (e.g., shifted, rotated, and compositional problems) or dynamic problems (e.g., the moving peak problem and the DF1 function). This is especially true for the original PSO algorithm. In order to improve the performance of PSO on complex problems, we present a novel algorithm, called self-learning PSO (SLPSO). In SLPSO, each particle has four different learning strategies to deal with different situations in the search space. The cooperation of the four learning strategies is implemented by an adaptive framework at the individual level, which enables each particle to choose the optimal learning strategy according to the properties of its own local fitness landscape. This flexible learning mechanism is able to automatically balance exploration and exploitation for each particle across the entire search space throughout the run. The other major contribution of this work is to adapt PSO to dynamic environments: we propose applying hierarchical clustering techniques to generate multiple populations. This idea is the first attempt to resolve several open issues that arise when using multiple-population methods in dynamic environments, such as how to define the size of the search region of a sub-population, how many individuals are needed in each sub-population, and how many sub-populations are needed. Experimental study has shown that this idea is effective in locating and tracking multiple peaks in dynamic environments.
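For reference, the original global-best PSO of Eberhart and Kennedy that the abstract takes as its starting point can be written in a few lines. This is the baseline algorithm, not the SLPSO variant; the inertia and acceleration parameters are typical textbook defaults, not values from the thesis.

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Minimise f over [-bound, bound]^dim with global-best PSO (baseline)."""
    rng = random.Random(0)
    X = [[rng.uniform(-bound, bound) for _ in range(dim)]
         for _ in range(n_particles)]               # positions
    V = [[0.0] * dim for _ in range(n_particles)]   # velocities
    pbest = [x[:] for x in X]                       # personal best positions
    pval = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]              # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # velocity update: inertia + cognitive pull + social pull
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            v = f(X[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, X[i][:]
                if v < gval:
                    gval, gbest = v, X[i][:]
    return gbest, gval

# Sphere function: a unimodal problem the baseline handles easily; the
# shifted/rotated/compositional problems above are where it struggles.
best, val = pso(lambda x: sum(xi * xi for xi in x), dim=2)
```

SLPSO's contribution sits inside the velocity update: instead of this single fixed rule, each particle adaptively picks one of four learning strategies per step.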
94. Contributions to Quadratic 0-1 Programming. Grainger, Daniel John. January 2010.
No description available.
95. Sparse canonical correlation analysis using the Lasso. Lykou, Anastasia. January 2008.
No description available.
96. Curry-Howard Term Calculi for Gentzen-Style Classical Logics. Summers, Alexander J. January 2008.
No description available.
97. High Accuracy Methods for the Solution to Two Point Boundary Value. Capper, Steven David. January 2008.
No description available.
98. Computational power of quantum many-body states and some results on discrete phase spaces. Gross, David. January 2008.
This thesis consists of two parts. The main part is concerned with new schemes for measurement-based quantum computation. Computers utilizing the laws of quantum mechanics promise an exponential speed-up over purely classical devices. Recently, considerable attention has been paid to the measurement-based paradigm of quantum computers. It has been realized that local measurements on certain highly entangled quantum states are computationally as powerful as the well-established model for quantum computation based on controlled unitary evolution. Prior to this thesis, only one family of quantum states was known to possess this computational power: the so-called cluster state and some very close relatives. Questions posed and answered in this thesis include: Can one find families of states different from the cluster, which constitute universal resources for measurement-based computation? Can the highly singular properties of the cluster state be relaxed while retaining universality? Is the quality of being a computational resource common or rare among pure states? We start by establishing a new mathematical tool for understanding the connection between local measurements on an entangled quantum state and a quantum computation. This framework - based on finitely correlated states (or matrix product states) common in many-body physics - is the first such tool general enough to apply to a wide range of quantum states beyond the family of graph states. We employ it to construct a variety of new universal resource states and schemes for measurement-based computation. It is found that many entanglement properties of universal states may be radically different from those of the cluster: we identify states which are locally arbitrarily close to a pure state, exhibit long-ranged correlations or cannot be converted into cluster states by means of stochastic local operations and classical communication. 
Flexible schemes for the compensation of the inherent randomness of quantum measurements are introduced. We proceed to provide a complete classification of a natural class of states which can take the role of a single logical qubit in a measurement-based quantum computer. Lastly, it is demonstrated that states can be too entangled to be useful for any computational purpose. Concentration of measure arguments show that this problem occurs for the dramatic majority of all pure states. The second part of the thesis is concerned with discrete quantum phase spaces. We prove that the only pure states to possess a non-negative Wigner function are stabilizer states. The result can be seen as a finite-dimensional analogue of a classic theorem due to Hudson, who showed that Gaussian states play the same role in the setting of continuous variable systems. The quantum phase space techniques developed for this argument are subsequently used to quantize a well-known structure from classical computer science: the Margulis expander.
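The matrix product state formalism the abstract leans on assigns each local measurement outcome a small matrix, with amplitudes given by matrix products. A toy illustration, unrelated to the thesis's actual resource states, is the GHZ state written as a bond-dimension-2 MPS; the tensors below are the standard textbook choice, picked here for the example.

```python
from math import sqrt

# Matrix product state (MPS) toy example: amplitudes of an n-qubit state
# are boundary-contracted products of 2x2 matrices, one per qubit.
# These tensors give the GHZ state (|00...0> + |11...1>)/sqrt(2).

A = {0: [[1.0, 0.0], [0.0, 0.0]],   # tensor for physical outcome 0
     1: [[0.0, 0.0], [0.0, 1.0]]}   # tensor for physical outcome 1

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def amplitude(bits):
    """<bits|psi> = l . A[b1] ... A[bn] . r, with boundary vectors l = r = (1,1)."""
    M = [[1.0, 0.0], [0.0, 1.0]]
    for b in bits:
        M = matmul(M, A[b])
    l, r = [1.0, 1.0], [1.0, 1.0]
    return sum(l[i] * M[i][j] * r[j]
               for i in range(2) for j in range(2)) / sqrt(2)

ghz_000 = amplitude([0, 0, 0])   # non-zero
ghz_111 = amplitude([1, 1, 1])   # non-zero
ghz_010 = amplitude([0, 1, 0])   # zero: mixed strings vanish
```

The point of the formalism, as the abstract notes, is that local measurements on such states can be analysed through these small matrices, which is what makes the computational power of states beyond the cluster family tractable to study.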
99. Decomposition schemes for polynomial optimisation, semidefinite programming and applications to nonconvex portfolio decisions. Kleniati, Polyxeni M. January 2010.
No description available.
100. Distributing entanglement for quantum computing. Campbell, Earl T. January 2008.
No description available.