1

Machine learning in compilers

Leather, Hugh January 2011 (has links)
Tuning a compiler so that it produces optimised code is a difficult task because modern processors are complicated; they have a large number of components operating in parallel and each is sensitive to the behaviour of the others. Building analytical models on which optimisation heuristics can be based has become harder as processor complexity has increased, and this trend is bound to continue as the world moves towards further heterogeneous parallelism. Compiler writers need to spend months to get a heuristic right for any particular architecture, and these days compilers often support a wide range of disparate devices. Whenever a new processor comes out, even if derived from a previous one, the compiler's heuristics will need to be retuned for it. This is typically too much effort, and so, in fact, most compilers are out of date.

Machine learning has been shown to help: by running example programs, compiled in different ways, and observing how those ways affect program run-time, automatic machine learning tools can predict good settings with which to compile new, as yet unseen programs. The field is nascent, but has demonstrated significant results already and promises a day when compilers will be tuned for new hardware without the need for months of compiler experts' time. Many hurdles still remain, however, and while experts no longer have to worry about the details of heuristic parameters, they must instead spend their time on the details of the machine learning process to get the full benefits of the approach. This thesis aims to remove some of the aspects of machine-learning-based compilers for which human experts are still required, paving the way for a completely automatic, retuning compiler.

First, we tackle the most conspicuous area of human involvement: feature generation. In all previous machine learning work for compilers, the features, which describe the important aspects of each example to the machine learning tools, must be constructed by an expert. Should that expert choose features poorly, they will miss crucial information without which the machine learning algorithm can never excel. We show not only that we can automatically derive good features, but that these features outperform those of human experts. We demonstrate our approach on loop unrolling, and find we do better than previous work, obtaining XXX% of the available performance, more than the XXX% of the previous state of the art.

Next, we demonstrate a new method to efficiently capture the raw data needed for machine learning tasks. The iterative compilation on which machine learning in compilers depends is typically time consuming, often requiring months of compute time. The underlying processes are also noisy, so most prior work falls into two categories: those which attempt to gather clean data by executing a large number of times, and those which ignore the statistical validity of their data to keep experiment times feasible. Our approach, on the other hand, guarantees clean data while adapting to the experiment at hand, needing an order of magnitude less work than prior techniques.
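As an illustration of the kind of prediction the abstract describes, the sketch below trains a toy nearest-neighbour model on hypothetical (feature vector, best unroll factor) pairs of the sort iterative compilation would produce, and uses it to choose an unroll factor for an unseen loop. The feature names, training data, and model are illustrative assumptions, not the method developed in the thesis.

```python
# Minimal sketch (not the thesis' actual model): predicting a loop unroll
# factor from hand-written loop features with a 1-nearest-neighbour rule.
# Feature names and data are hypothetical examples of what iterative
# compilation and timing would yield.

import math

# Each training example: (feature vector, best unroll factor found by
# compiling and timing the loop with several candidate factors).
# Features: (trip-count estimate, body size in IR ops, number of memory ops)
training = [
    ((1000.0, 12.0, 4.0), 8),
    ((16.0, 60.0, 20.0), 1),
    ((200.0, 25.0, 8.0), 4),
    ((50.0, 8.0, 2.0), 2),
]

def predict_unroll(features):
    """Return the unroll factor of the closest loop seen during training."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, factor = min((dist(features, f), u) for f, u in training)
    return factor

# A new, unseen loop: long trip count, small body, few memory ops.
print(predict_unroll((800.0, 10.0, 3.0)))  # -> 8 with this toy data
```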
2

Computational logic : structure sharing and proof of program properties

Moore, J. Strother January 1973 (has links)
This thesis describes the results of two studies in computational logic. The first concerns a very efficient method of implementing resolution theorem provers. The second concerns a non-resolution program which automatically proves many theorems about LISP functions, using structural induction.

In Part 1, a method of representing clauses, called 'structure sharing', is presented. In this representation, terms are instantiated by binding their variables on a stack, or in a dictionary, and derived clauses are represented in terms of their parents. This allows the structure representing a clause to be used in different contexts without renaming its variables or copying it in any way. The amount of space required for a clause is (2 + n) 36-bit words, where n is the number of components in the unifying substitution made for the resolution or factor. This is independent of the number of literals in the clause and the depth of function nesting. Several ways of making the unification algorithm more efficient are presented. These include a method of preprocessing the input terms so that the unifying substitution for derived terms can be discovered by a recursive look-up procedure. Techniques for naturally mixing computation and deduction are presented. The structure sharing implementation of SL-resolution is described in detail. The relationship between structure sharing and programming language implementations is discussed. Part 1 concludes with the presentation of a programming language, based on predicate calculus, with structure sharing as the natural implementation.

Part 2 of this thesis describes a program which automatically proves a wide variety of theorems about functions written in a subset of pure LISP. Features of this program include:
• The program is fully automatic, requiring no information from the user except the LISP definitions of the functions involved and the statement of the theorem to be proved. No inductive assertions are required of the user.
• The program uses structural induction when required, automatically generating its own induction formulas.
• All relationships in the theorem are expressed in terms of user-defined LISP functions, rather than a second logical language.
• The system employs no built-in information about any non-primitive function. All properties required of any function involved in a proof are derived and established automatically.
• The program is capable of generalizing some theorems in order to prove them; in doing so, it often generates interesting lemmas.
• The program can write new, recursive LISP functions automatically in attempting to generalize a theorem.
• Finally, the program is very fast by theorem proving standards, requiring around 10 seconds per proof.
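A minimal sketch of the structure-sharing idea described above: a term is stored once, and every derived instance is just a reference to it plus a binding environment, so variables are dereferenced on demand rather than the term being copied or renamed. The term encoding and helper names here are assumptions for illustration, not Moore's actual representation.

```python
# Minimal sketch (illustrative only): structure sharing for logical terms.
# A term is either a variable (a str starting with '?') or a tuple
# (function_symbol, arg, arg, ...). Derived instances are (term, environment)
# pairs; the shared term itself is never rewritten.

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def deref(term, env):
    """Follow variable bindings without modifying the shared structure."""
    while is_var(term) and term in env:
        term = env[term]
    return term

def reify(term, env):
    """Build a concrete instance only when one is actually needed, e.g. for printing."""
    term = deref(term, env)
    if is_var(term):
        return term
    head, *args = term
    return (head,) + tuple(reify(a, env) for a in args)

# The clause P(?x, f(?y)) is stored once; two different derivation steps share
# it through two different environments instead of holding two copies.
shared = ("P", "?x", ("f", "?y"))
env1 = {"?x": ("a",), "?y": ("b",)}
env2 = {"?x": ("c",)}

print(reify(shared, env1))  # ('P', ('a',), ('f', ('b',)))
print(reify(shared, env2))  # ('P', ('c',), ('f', '?y'))
```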
