121. An integration of reduction and logic for programming languages. Wright, David A. January 1988.
A new declarative language is presented which captures the expressibility of both logic programming languages and functional languages. This is achieved by conditional graph rewriting, with full unification as the parameter-passing mechanism. The syntax and semantics are described both formally and informally, and examples are offered to support the expressibility claim made above. The language design is of further interest due to its uniformity and the inclusion of a novel mechanism for type inference in the presence of derived type hierarchies.
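For readers unfamiliar with the mechanism, a minimal first-order unifier sketches what "full unification as the parameter-passing mechanism" involves. The term representation below is an illustrative choice of this sketch, not of the dissertation, and the occurs check is omitted for brevity:

```python
# A minimal first-order unifier. Terms are variables (strings starting
# with an uppercase letter), constants (lowercase strings), or compound
# terms written as (functor, arg1, arg2, ...) tuples.

def walk(term, subst):
    """Follow variable bindings until a non-variable or an unbound variable."""
    while isinstance(term, str) and term[0].isupper() and term in subst:
        term = subst[term]
    return term

def unify(t1, t2, subst=None):
    """Return a substitution unifying t1 and t2, or None on failure.
    (Occurs check omitted for brevity.)"""
    if subst is None:
        subst = {}
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str) and t1[0].isupper():        # t1 is an unbound variable
        return {**subst, t1: t2}
    if isinstance(t2, str) and t2[0].isupper():        # t2 is an unbound variable
        return {**subst, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) \
            and len(t1) == len(t2) and t1[0] == t2[0]:  # same functor and arity
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                                         # clash: cannot unify

# Unifying f(X, g(b)) with f(a, g(Y)) binds X -> a, Y -> b.
print(unify(("f", "X", ("g", "b")), ("f", "a", ("g", "Y"))))
```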
122. Multi-period stochastic programming. Gassmann, Horand Ingo. January 1987.
This dissertation presents various aspects of the solution of the linear multi-period stochastic programming problem. Under relatively mild assumptions on the structure of the random variables present in the problem, the value function at every time stage is shown to be jointly convex in the history of the process, namely the random variables observed so far as well as the decisions taken up to that point.
Convexity enables the construction of both upper and lower bounds on the value of the entire problem by suitable discretization of the random variables. These bounds are developed in Chapter 2, where it is also demonstrated how the bounds can be made arbitrarily sharp if the discretizations are chosen sufficiently fine. The chapter emphasizes computability of the bounds, but does not concern itself with finding the discretizations themselves.
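As a generic illustration of how convexity yields two-sided bounds under discretization (the dissertation's own bounds are more general), consider E[f(X)] for a convex f: applying Jensen's inequality within each cell of a partition gives a lower bound from the cell means, while the chord of f over each cell lies above f and gives an upper bound. A minimal one-dimensional sketch, with the distribution and partition chosen purely for illustration:

```python
import numpy as np

# Two-sided bounds on E[f(X)] for convex f via discretization of X.
# Lower bound: Jensen per cell, f(E[X | cell]) weighted by cell mass.
# Upper bound: the chord of f over each cell lies above f, and the
# chord is linear, so its value at the cell mean bounds E[f(X) | cell].
f = lambda x: x ** 2                       # a convex test function
rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, 200_000)   # X ~ Uniform(0, 1); E[f(X)] = 1/3

edges = np.linspace(0.0, 1.0, 6)           # partition the support into 5 cells
lower = upper = 0.0
for a, b in zip(edges[:-1], edges[1:]):
    in_cell = samples[(samples >= a) & (samples < b)]
    p = len(in_cell) / len(samples)        # cell probability mass
    m = in_cell.mean()                     # conditional mean (cell centroid)
    lower += p * f(m)                      # Jensen: f(E[X|cell]) <= E[f|cell]
    t = (m - a) / (b - a)                  # chord value at the cell mean
    upper += p * ((1 - t) * f(a) + t * f(b))

print(f"lower {lower:.4f} <= E[f(X)] = 0.3333 <= upper {upper:.4f}")
```

Refining the partition tightens both bounds toward E[f(X)], which is the "arbitrarily sharp" behavior the chapter establishes.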
The practice commonly followed to obtain a discretization of a random variable is to partition its support, usually into rectangular subsets. In order to apply the bounds of Chapter 2, one needs to determine the probability mass and weighted centroid for each element of the partition. This is a hard problem in itself, since in the continuous case it amounts to a multi-dimensional integration. Chapter 3 describes some Monte Carlo techniques which can be used for normal distributions. These methods require random sampling, and the two main issues addressed are efficiency and accuracy. It turns out that the optimal method to use depends somewhat on the probability mass of the set in question.
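The dissertation's more refined sampling methods are not reproduced here; a crude Monte Carlo baseline for the two quantities needed, the probability mass and weighted centroid of a rectangular cell under a multivariate normal, might look as follows (the distribution parameters, cell, and sample size are illustrative):

```python
import numpy as np

# Crude Monte Carlo estimate of the probability mass and weighted
# centroid of a rectangular cell under a multivariate normal.
rng = np.random.default_rng(1)
mean = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.3], [0.3, 1.0]])
lo, hi = np.array([0.0, -0.5]), np.array([1.5, 1.0])    # the rectangle

draws = rng.multivariate_normal(mean, cov, size=500_000)
inside = np.all((draws >= lo) & (draws <= hi), axis=1)

mass = inside.mean()                        # P(X in cell)
centroid = draws[inside].mean(axis=0)       # E[X | X in cell]
print(f"mass ~ {mass:.4f}, centroid ~ {centroid.round(4)}")
# Accuracy degrades for low-mass cells: few hits means high relative
# error, which is why the best method depends on the cell's mass.
```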
Having obtained a suitable discretization, one can then solve the resulting large scale linear program which approximates the original problem. Its constraint matrix is highly structured, and Chapter 4 describes one algorithm which attempts to utilize this structure.
The algorithm uses the Dantzig-Wolfe decomposition principle, nesting decomposition levels one inside the other. Many of the subproblems generated in the course of this decomposition share the same constraint matrices and can thus be solved simultaneously. Numerical results show that the algorithm may out-perform a linear programming package on some simple problems.
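The nested decomposition itself is too long to sketch here, but the key computational observation, that subproblems sharing a constraint matrix can reuse work, has a simple analogue: factor the shared matrix once and reuse the factorization across many right-hand sides. An illustrative sketch (the matrix and data are invented):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Subproblems that share a constraint matrix can reuse one factorization.
rng = np.random.default_rng(2)
A = rng.standard_normal((200, 200))        # the shared matrix
rhs = rng.standard_normal((200, 50))       # 50 subproblems' right-hand sides

lu, piv = lu_factor(A)                     # factor once: O(n^3)
solutions = lu_solve((lu, piv), rhs)       # each extra solve is only O(n^2)

assert np.allclose(A @ solutions, rhs)
```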
Chapter 5, finally, combines all these ideas and applies them to a problem in forest management. Here it is required to find logging levels in each of several time periods to maximize the expected revenue, computed as the volume cut times an appropriate discount factor. Uncertainty enters into the model in the form of the risk of forest fires and other environmental hazards, which may destroy a fraction of the existing forest. Several discretizations are used to formulate both upper and lower bound approximations to the original problem.
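A toy version of the model's revenue calculation, with all numbers invented, conveys its flavor:

```python
# Toy two-period version of the forest-management model: expected
# discounted revenue of a logging plan when, each period, a fire may
# destroy a fraction of the remaining stand. All numbers are invented.
discount = 0.95
p_fire, burn_frac = 0.1, 0.6      # fire probability and fraction destroyed

def expected_revenue(stand, cuts, t=0):
    """Expected discounted revenue of cutting cuts[t:] from stand."""
    if t == len(cuts):
        return 0.0
    cut = min(cuts[t], stand)                  # cannot cut more than exists
    revenue = discount ** t * cut
    left = stand - cut
    # branch on whether a fire occurs before the next period
    return revenue + (
        (1 - p_fire) * expected_revenue(left, cuts, t + 1)
        + p_fire * expected_revenue(left * (1 - burn_frac), cuts, t + 1)
    )

print(expected_revenue(100.0, [40.0, 60.0]))   # cut 40 now, 60 next period
```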
123. Interval convex programming, orthogonal linear programming, and program generation procedures. Ristroph, John Heard. 05 January 2010.
Three topics are developed: interval convex programming, orthogonal linear programming, and program generation techniques. The interval convex programming problem is similar to the convex programming problem of the real number system except that all parameters are specified as intervals of real numbers rather than as real scalars. The interval programming solution procedure involves the solution of a series of 2n real-valued convex programs, where n is the dimension of the space. The solution of an interval programming problem is an interval vector which contains all possible solutions to any real-valued convex program which may be realized.
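For intuition about the 2n solves: each coordinate of the interval solution is bracketed by solving realized programs that drive it to its extremes. A one-dimensional toy, in which the dependence on the interval parameter is monotone so the two endpoint programs suffice (an assumption of this sketch, not a claim about the dissertation's procedure):

```python
from scipy.optimize import minimize_scalar

# Toy interval convex program: minimize (x - a)^2 where the parameter a
# is only known to lie in [a_lo, a_hi]. The minimizer is x* = a, so the
# interval solution is [a_lo, a_hi]; we recover it with 2 real-valued
# solves (2n with n = 1), one per interval endpoint.
a_lo, a_hi = 1.0, 3.0

x_lo = minimize_scalar(lambda x: (x - a_lo) ** 2).x
x_hi = minimize_scalar(lambda x: (x - a_hi) ** 2).x
print(f"solution interval ~ [{x_lo:.4f}, {x_hi:.4f}]")   # ~ [1, 3]
```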
Attempts to improve the efficiency of the interval convex programming problem led to the eventual development of a new solution procedure for the real-valued linear programming problem: orthogonal linear programming. This new algorithm evolved from some heuristic procedures which were initially examined in the attempt to improve solution efficiency. In the course of testing these heuristics, which were unsuccessful, procedures were developed whereby it is possible to generate discrete and continuous mathematical programs with randomly chosen parameters but known solutions.
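The known-solution generation idea can be sketched for linear programs using complementary slackness; the construction below is a standard one, assumed rather than taken from the dissertation:

```python
import numpy as np
from scipy.optimize import linprog

# Generate a random LP  min c'x  s.t. Ax = b, x >= 0  with known optimum.
# Pick x* and dual y freely; set b = A x* and c = A'y + s with s >= 0
# and s_i = 0 wherever x*_i > 0. Complementary slackness then certifies
# that x* is optimal.
rng = np.random.default_rng(3)
m, n = 5, 8
A = rng.standard_normal((m, n))
x_star = np.concatenate([rng.uniform(1, 2, m), np.zeros(n - m)])
y = rng.standard_normal(m)
s = np.concatenate([np.zeros(m), rng.uniform(0.5, 1.5, n - m)])

b = A @ x_star
c = A.T @ y + s

res = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")
print(np.isclose(res.fun, c @ x_star))    # solver matches the known optimum
```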
124. The mixed-integer bilinear programming problem with extensions to zero-one quadratic programs. Adams, Warren Philip. January 1985.
This research effort is concerned with a class of mathematical programming problems referred to as Mixed-Integer Bilinear Programming Problems. This class of problems, which arises in production, location-allocation, and distribution application contexts, may be considered as a discrete version of the well-known Bilinear Programming Problem in that one set of decision variables is restricted to be binary valued. The structure of this problem is studied, and special cases wherein it is readily solvable are identified. For the more general case, a new linearization technique is introduced and demonstrated to lead to a tighter linear programming relaxation than obtained through available linearization methods. Based on this linearization, a composite Lagrangian relaxation-implicit enumeration-cutting plane algorithm is developed. Extensive computational experience is provided to test the efficiency of various algorithmic strategies and the effects of problem data on the computational effort of the proposed algorithm.
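The dissertation's tighter linearization is not reproduced here; as a baseline for comparison, the standard linearization replaces each product of a binary x and a bounded continuous y with a new variable w and four linear constraints. A sketch verifying the substitution (the bound U and test points are illustrative):

```python
# Baseline linearization of w = x*y with x binary and 0 <= y <= U:
#   w <= U*x,  w <= y,  w >= y - U*(1 - x),  w >= 0.
# When x = 0 these force w = 0; when x = 1 they force w = y.

def linearized_feasible(x, y, w, U, tol=1e-9):
    """Check the four linear constraints that replace w = x*y."""
    return (w <= U * x + tol and w <= y + tol
            and w >= y - U * (1 - x) - tol and w >= -tol)

U = 10.0
for x in (0, 1):
    for y in (0.0, 3.7, 10.0):
        w = x * y    # the true product satisfies all four constraints
        assert linearized_feasible(x, y, w, U)
print("linearization consistent with w = x*y on test points")
```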
The solution strategy developed for the Mixed-Integer Bilinear Programming Problem may be applied, with suitable modifications, to other classes of mathematical programming problems: in particular, to the Zero-One Quadratic Programming Problem. In what may be considered as an extension to the work performed on the Mixed-Integer Bilinear Programming Problem, a solution strategy based on an equivalent linear reformulation is developed for the Zero-One Quadratic Programming Problem. The strategy is essentially an implicit enumeration algorithm which employs Lagrangian relaxation, Benders' cutting planes, and local explorations. Computational experience for this problem class is provided to justify the worth of the proposed linear reformulation and algorithm.
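For the zero-one quadratic case, the classical baseline reformulation (again, not necessarily the one proposed here) replaces each product of binaries with a new variable and three constraints:

```python
from itertools import product

# Classical linearization of w = xi*xj for binary xi, xj:
#   w <= xi,  w <= xj,  w >= xi + xj - 1,  w >= 0.
# On binary points these constraints force w to equal the product.
for xi, xj in product((0, 1), repeat=2):
    candidates = [w for w in (0, 1)
                  if w <= xi and w <= xj and w >= xi + xj - 1]
    assert candidates == [xi * xj]
print("w is forced to xi*xj at every binary point")
```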
125. An implementation of a software engineering project management system: a tool for a prototype software engineering environment. Castle, Oliver Bert. January 2010.
Typescript (photocopy). Digitized by Kansas Correctional Industries.
126. Exploiting data parallelism in artificial neural networks with Haskell. Heartsfield, Gregory Lynn. August 2009.
Functional parallel programming techniques for feed-forward artificial neural networks trained using backpropagation learning are analyzed. In particular, the Data Parallel Haskell extension to the Glasgow Haskell Compiler is considered as a tool for achieving data parallelism. We find much potential and elegance in this method, and determine that a sufficiently large workload is critical in achieving real gains. Several additional features are recommended to increase usability and improve results on small datasets.
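No Data Parallel Haskell code is reproduced here, but the underlying idea, expressing the per-example work of backpropagation as whole-batch array operations, can be sketched in vectorized Python (network shape, data, and hyperparameters are invented):

```python
import numpy as np

# Data parallelism in backpropagation: the per-example forward and
# backward passes become whole-batch matrix operations, so the same
# arithmetic runs across all examples at once. One-hidden-layer network
# on XOR-style data.
rng = np.random.default_rng(4)
X = rng.integers(0, 2, (256, 2)).astype(float)     # batch of binary inputs
t = (X[:, :1] != X[:, 1:]).astype(float)           # XOR targets

W1, W2 = rng.standard_normal((2, 8)), rng.standard_normal((8, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

before = np.abs(sigmoid(sigmoid(X @ W1) @ W2) - t).mean()
for _ in range(5000):
    h = sigmoid(X @ W1)                  # forward pass, all examples at once
    y = sigmoid(h @ W2)
    d2 = (y - t) * y * (1 - y)           # output-layer error, whole batch
    d1 = (d2 @ W2.T) * h * (1 - h)       # backpropagated hidden-layer error
    W2 -= 2.0 * h.T @ d2 / len(X)        # batch-averaged gradient steps
    W1 -= 2.0 * X.T @ d1 / len(X)

# the batch updates should drive the error well below its starting value
print(f"mean abs error: {before:.3f} -> {np.abs(y - t).mean():.3f}")
```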
127. A semantic approach to automatic program improvement. Darlington, John. January 1972.
The programs that are easiest to write and understand are often not the most efficient. This thesis gives methods of converting programs of the former type to those of the latter type; this involves converting definitions of algorithms given as recursion equations using high-level primitives into lower-level flow chart programs. The main activities involved are recursion removal (cf. Strong), loop elimination, and the overwriting of shared structures. We have concentrated on the semantics, rather than the syntax, of the programs we are transforming, and we have used techniques developed in work done on proving the correctness of programs. The transformations are done in a hierarchical manner and can be regarded as compiling a program defined in a structured manner (Dijkstra) to produce an efficient low-level program that simulates it. We describe the implementation of a system that allows the user to specify algorithms in a simple set language and converts them to flow chart programs in either a bitstring or list processing language. Both of these lower languages allow the sharing of structures. The principles are applicable to other domains, and we describe how our system can be applied more generally.
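A small invented example in the spirit of recursion removal: a definition given as a recursion equation, and the equivalent loop a flow chart program would use:

```python
# Recursion equation:  sum [] = 0 ;  sum (x:xs) = x + sum xs
def sum_rec(xs):
    return 0 if not xs else xs[0] + sum_rec(xs[1:])

# After recursion removal: the same function as a flow-chart-style loop.
# The transformation introduces an accumulator; it is semantics-
# preserving, which is why program-correctness proof techniques apply.
def sum_loop(xs):
    acc = 0
    for x in xs:         # each iteration simulates one unfolding step
        acc += x
    return acc

assert sum_rec([1, 2, 3, 4]) == sum_loop([1, 2, 3, 4]) == 10
```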
128. Issues in distributed programming languages: the evolution of SR (concurrent). Olsson, Ronald Arthur. January 1986.
This dissertation examines fundamental issues that face the designers of any distributed programming language. It considers how programs are structured, how processes communicate and synchronize, and how hardware failures are represented and handled. We discuss each of these issues and argue for a particular approach based on our application domain: distributed systems (such as distributed operating systems) and distributed user applications. We conclude that a language for such applications should include the following mechanisms: dynamic modules, shared variables (within a module), dynamic processes, synchronous and asynchronous forms of message passing, rendezvous, concurrent invocation, and early reply.

We then describe the current SR language, which has evolved considerably based on our experience. SR provides the above mechanisms in a way that is expressive yet simple, resolving the tension between expressiveness and simplicity by providing a variety of mechanisms based on only a few underlying concepts. The main language constructs are still resources and operations. Resources encapsulate processes and the variables they share; operations provide the primary mechanism for process interaction. One way in which SR has changed is that both resources and processes are now created dynamically. Another change is that all the common mechanisms for process interaction--local and remote procedure call, rendezvous, dynamic process creation, asynchronous message passing, and semaphores--are now supported by a novel integration of the mechanisms for invoking and servicing operations. Many small and several larger examples illustrate SR's mechanisms and the interplay between them; these examples also demonstrate the language's expressiveness and flexibility.

We then describe our implementation of SR. The compiler, linker, and run-time support are summarized. We focus on how the generated code and run-time support interact to provide dynamic resources and to generate and service invocations, and we describe optimizations for certain operations. Measurements of the implementation's size and cost are given. The implementation has been in use since November 1985 and is currently being improved.

Finally, we justify SR's syntax and semantics and examine how its mechanisms compare to other approaches to distributed programming. We also discuss how SR balances expressiveness, simplicity, and efficiency.
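SR syntax is not shown here; as a loose analogue of its unified invocation and servicing mechanisms, the sketch below services one operation either asynchronously (send) or synchronously (call), using Python threads and queues as stand-ins:

```python
import queue, threading

# Loose analogue of SR's unified invocation/servicing: one servicing
# loop handles both asynchronous sends (no reply expected) and
# synchronous calls (the caller blocks on a private reply queue).
ops = queue.Queue()

def server():
    while True:
        msg, arg, reply_to = ops.get()
        if msg == "stop":
            break
        result = arg * 2                   # "service" the operation
        if reply_to is not None:           # a call: rendezvous-style reply
            reply_to.put(result)

def send(arg):                             # asynchronous invocation
    ops.put(("work", arg, None))

def call(arg):                             # synchronous invocation
    reply = queue.Queue()
    ops.put(("work", arg, reply))
    return reply.get()                     # block until serviced

t = threading.Thread(target=server)
t.start()
send(10)                 # fire-and-forget
print(call(21))          # blocks, prints 42
ops.put(("stop", None, None))
t.join()
```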
129. Automatic complexity analysis of logic programs. Lin, Nai-Wei. January 1993.
This dissertation describes research toward automatic complexity analysis of logic programs and its applications. Automatic complexity analysis of programs concerns the inference of the amount of computational resources consumed during program execution, and has been studied primarily in the context of imperative and functional languages. This dissertation extends these techniques to logic programs so that they can handle nondeterminism, namely, the generation of multiple solutions via backtracking. We describe the design and implementation of a (semi-)automatic worst-case complexity analysis system for logic programs. This system can conduct the worst-case analysis for several complexity measures, such as argument size, number of solutions, and execution time.

This dissertation also describes an application of such analyses, namely, a runtime mechanism for controlling task granularity in parallel logic programming systems. The performance of parallel systems often starts to degrade when the concurrent tasks in the systems become too fine-grained. Our approach to granularity control is based on time complexity information. With this information, we can compare the execution cost of a procedure with the average process creation overhead of the underlying system to determine at runtime if we should spawn a procedure call as a new concurrent task or just execute it sequentially. Through experimental measurements, we show that this mechanism can substantially improve the performance of parallel systems in many cases.

This dissertation also presents several source-level program transformation techniques for optimizing the evaluation of logic programs containing finite-domain constraints. These techniques are based on number-of-solutions complexity information. The techniques include planning the evaluation order of subgoals, reducing the domain of variables, and planning the instantiation order of variable values. This application allows us to solve a problem by starting with a more declarative but less efficient program, and then automatically transforming it into a more efficient program. Through experimental measurements we show that these program transformation techniques can significantly improve the efficiency of the class of programs containing finite-domain constraints in most cases.
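The granularity-control rule, spawn only when estimated work exceeds the spawn overhead, is easy to sketch; the cost model and threshold below are placeholders, not the dissertation's measures:

```python
import concurrent.futures as cf

# Granularity control: spawn a concurrent task only when the estimated
# execution cost exceeds the task-creation overhead; otherwise execute
# the call sequentially.
SPAWN_OVERHEAD = 50_000                    # placeholder: cost units per task

def estimated_cost(n):
    """Stand-in for a derived worst-case time bound, here O(n^2)."""
    return n * n

def work(n):
    return sum(i * i for i in range(n))

def run_all(calls, pool):
    futures, results = [], []
    for n in calls:
        if estimated_cost(n) > SPAWN_OVERHEAD:
            futures.append(pool.submit(work, n))   # coarse grain: spawn
        else:
            results.append(work(n))                # fine grain: run inline
    return results + [f.result() for f in futures]

with cf.ThreadPoolExecutor() as pool:
    print(sum(run_all([10, 100, 1_000, 10_000], pool)))
```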
130. Double-basis simplex method for large scale linear programming. Proctor, Paul Edward. January 1982.
The basis handling procedures of the simplex method are formulated in terms of a "double basis". That is, the basis is factored in terms of B̄, the pseudobasis matrix, which is the basis matrix at the last refactorization, and permutation matrices P and Q (the block factorization itself is omitted in the source abstract). Forward and backward transformations and update are presented for each of two implementations of the double-basis method. The first implementation utilizes an explicit G⁻¹ matrix. The second uses a sparse LU factorization of G. Both are based on Marsten's modularized XMP package, in which standard simplex method routines are replaced by corresponding double-basis method routines. XMP and the LU double-basis method implementation employ Reid's LA05 routines for handling sparse linear programming bases. All calculations are done without reference to the H matrix. Therefore, the update is restricted to G, which has dimension limited by the refactorization frequency, and to P and Q, which are held as lists. This can lead to a saving in storage space and updating time. The cost is that time for transformations will be about double. Computational comparisons of storage and speed performance are made with the standard simplex method on problems of up to 1480 constraints. It is found that, generally, the double-basis method performs best on larger, denser problems. Density seems to be the more important factor, and the problems with large nonzero growth between refactorizations are the better ones for the double-basis method. Storage saving in the basis inverse representation versus the standard method is as high as 36%, whereas the double-basis run times are 1.2 or more times as great.
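Proctor's double-basis update itself is not reproduced here; the classical product-form scheme that such methods refine, keeping a factorization of the basis at the last refactorization plus a file of cheap "eta" updates, can be sketched as follows (dimensions and data are illustrative):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Product-form flavor of simplex basis handling: B = B0 * E1 * ... * Ek,
# where B0 is the basis at the last refactorization (kept factored) and
# each Ej is an eta matrix, the identity with one column replaced,
# recording one basis change. Solving B x = b costs one solve with B0
# plus one cheap pass per eta; when the eta file grows long, refactorize.
rng = np.random.default_rng(5)
n = 6
B = rng.standard_normal((n, n)) + n * np.eye(n)   # current basis
lu, piv = lu_factor(B)                            # factorization of B0 = B
etas = []                                         # the eta file: (j, v) pairs

def replace_column(j, new_col):
    """Basis change: column j of B is replaced by new_col."""
    global B
    v = np.linalg.solve(B, new_col)    # real codes obtain v from an FTRAN
    etas.append((j, v))                # E is the identity with column j = v
    B = B.copy(); B[:, j] = new_col    # dense copy kept only to verify

def solve(b):
    """Solve B x = b using B0's factorization plus the eta file."""
    x = lu_solve((lu, piv), b)         # solve with B0
    for j, v in etas:                  # then apply each E^{-1} in turn
        x[j] /= v[j]
        mask = np.arange(n) != j
        x[mask] -= v[mask] * x[j]
    return x

replace_column(2, rng.standard_normal(n))
replace_column(4, rng.standard_normal(n))
b = rng.standard_normal(n)
print(np.allclose(B @ solve(b), b))    # True: updates track the new basis
```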