311 |
Complexity issues in general purpose parallel computing / Chin, Douglas Andrew. January 1991
In recent years, powerful theoretical techniques have been developed for supporting communication, synchronization and fault tolerance in general purpose parallel computing. The proposition of this thesis is that different techniques should be used to support different algorithms. The determining factor is granularity, or the extent to which an algorithm uses long blocks for communication between processors. We consider the Block PRAM model of Aggarwal, Chandra and Snir, a synchronous model of parallel computation in which the processors communicate by accessing a shared memory. In the Block PRAM model, there is a time cost for each access by a processor to a block of locations in the shared memory. This feature of the model encourages the use of long blocks for communication. In the thesis we present Block PRAM algorithms and lower bounds for specific problems on arrays, lists, expression trees, graphs, strings, binary trees and butterflies. These results introduce useful basic techniques for parallel computation in practice, and provide a classification of problems and algorithms according to their granularity. Also presented are optimal algorithms for universal hashing and skewing, which are techniques for supporting conflict-free memory access in general- and special-purpose parallel computations, respectively. We explore the Block PRAM model as a theoretical basis for the design of scalable general purpose parallel computers. Several simulation results are presented which show the Block PRAM model to be comparable to, and competitive with, other models that have been proposed for this role. Two major advantages of machines based on the Block PRAM model are that they preserve the granularity properties of individual algorithms and can efficiently incorporate a significant degree of fault tolerance. The thesis also discusses methods for the design of algorithms that do not use synchronization. We apply these methods to define fast circuits for several fundamental Boolean functions.
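The access-cost feature the abstract describes can be made concrete with a small sketch. In the Block PRAM model a processor pays a startup latency plus one time unit per word when it touches a contiguous block of shared memory; the latency value and function names below are illustrative, not taken from the thesis.

```python
# Sketch of the Block PRAM cost model (Aggarwal, Chandra and Snir):
# a processor pays a startup latency l plus one time unit per word
# when it accesses a block of b contiguous shared-memory cells.
# The latency value below is illustrative only.

def access_cost(block_length, latency):
    """Time to access one contiguous block of the given length."""
    return latency + block_length

def copy_cost(n, block_length, latency):
    """Cost of moving n words using blocks of a fixed length."""
    blocks = -(-n // block_length)  # ceiling division
    return blocks * access_cost(block_length, latency)

l = 100          # illustrative startup latency
n = 10_000       # words to transfer

word_at_a_time = copy_cost(n, 1, l)   # fine-grained: n * (l + 1)
one_big_block = copy_cost(n, n, l)    # coarse-grained: l + n

print(word_at_a_time)  # 1010000
print(one_big_block)   # 10100
```

The two totals show why the model rewards coarse granularity: the same transfer costs two orders of magnitude less when the latency is amortized over one long block.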
|
312 |
Optimization and enhancement strategies for data flow systems / Dunkelman, Laurence William. January 1984
The data flow machine, which represents a radical departure from the conventional von Neumann architecture, shows great potential as a candidate for the future generation of computers. The use of data structures and the effective exploitation of parallelism are two issues which have not yet been fully resolved within the framework of the data flow model. / This thesis concentrates on these problems in the following manner. Firstly, the role memory can play in a data flow system is examined. A new concept called "active memory" is introduced together with various new actors. It is shown that these enhancements make it possible to implement a limited form of shared memory which readily supports the use of data structures. / Secondly, execution performance of data flow programs is examined in the context of conditional statements. Transformations applied to the data flow graph are presented which increase the degree of parallelism. Analysis, both theoretical and empirical, shows that substantial improvements are obtained with a minimal impact on other system components.
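One well-known family of transformations for conditionals in data flow graphs, sketched below under stated assumptions, fires both branch subgraphs as soon as their inputs arrive and lets a merge actor keep the value the predicate selects; the actor names and the thread-based simulation are illustrative, not the thesis's own notation, and the transformation is only safe when the branches are side-effect-free.

```python
# Illustrative sketch of a parallelism-increasing transformation for a
# conditional in a data flow graph: the predicate and both branch
# subgraphs fire concurrently, and a "select" actor forwards the token
# the predicate chooses. Safe only for side-effect-free branches.

from concurrent.futures import ThreadPoolExecutor

def select(pred, then_val, else_val):
    """Merge actor: forwards one of two speculatively computed tokens."""
    return then_val if pred else else_val

def eager_conditional(pred_fn, then_fn, else_fn, x):
    """Fire the predicate and both branches concurrently, then select."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        p = pool.submit(pred_fn, x)
        t = pool.submit(then_fn, x)
        e = pool.submit(else_fn, x)
        return select(p.result(), t.result(), e.result())

result = eager_conditional(lambda x: x > 0, lambda x: x * 2, lambda x: -x, 5)
print(result)  # 10
```

The sequential form would wait for the predicate before firing either branch; the eager form trades wasted work on the untaken branch for a shorter critical path.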
|
313 |
Collaterality and parallel processing in Algol 68 / Miller, Robert James. January 1974
No description available.
|
314 |
An experimental investigation of scheduling non-identical parallel processors with sequence-dependent set-up times and due dates / Smith, Terrence A. 26 February 1993
An experimental investigation of factors affecting the scheduling of a system of parallel, non-identical processors was carried out using a series of experimental designs. The system variables included processor capacity relationships, sequencing and assignment rules, job size, and product demand distributions. The effect of the variables was measured by comparing mean flow times, proportion of jobs tardy, and spread in processor utilization. The study found that system loading and set-up times play a major role in system performance. Grouping jobs by product minimizes set-up times, and hence mean flow time and tardiness, at the expense of control over individual processor usage. Factors involving processor capacities and assignment rules tend to have no effect on any of the system performance measures. Variability in job size and product demand tended to give flexibility in controlling individual processor utilization. / Graduation date: 1993
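The grouping effect the abstract reports can be sketched on a single processor with sequence-dependent set-ups, where a set-up is paid only when the product changes between consecutive jobs; the job data and set-up time below are invented for the example, not drawn from the study.

```python
# Illustrative sketch of the reported grouping effect: with
# sequence-dependent set-ups (paid only on a product changeover),
# sequencing jobs by product lowers mean flow time.
# Job data and the set-up duration are invented for the example.

def mean_flow_time(sequence, setup=5):
    """Mean completion time of (product, processing_time) jobs."""
    clock, total, prev = 0, 0, None
    for product, p_time in sequence:
        if product != prev:
            clock += setup          # changeover set-up
        clock += p_time
        total += clock              # flow time of this job
        prev = product
    return total / len(sequence)

jobs = [("A", 3), ("B", 4), ("A", 2), ("B", 1)]
interleaved = mean_flow_time(jobs)
grouped = mean_flow_time(sorted(jobs, key=lambda j: j[0]))

print(interleaved)  # 19.75
print(grouped)      # 14.25
```

Grouping pays two set-ups instead of four here; the flip side noted in the abstract is that forcing all of one product onto one run (or one processor) removes a lever for balancing individual processor utilization.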
|
315 |
A parallel architecture for image and signal processing / Chalmers, Andrew. Unknown date
Thesis (MEng) -- University of South Australia, 1994
|
316 |
General purpose parallel machine design and analysis / Moseley, Philip A. Unknown date
Thesis (MAppSc) -- University of South Australia, 1993
|
317 |
Scheduling in metacomputing systems / James, Heath A. (Heath Alexander). January 1999
Bibliography: leaves 211-234. / xiv, 234 p. : ill. ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / The general problem of scheduling is investigated, with focus on jobs consisting of both independent and dependent programs. Using the constraint of restricted placement of programs, the thesis presents a scheduling system that produces heuristically good execution schedules in the absence of complete global system state information. / Thesis (Ph.D.)--University of Adelaide, Dept. of Computer Science, 1999
|
318 |
Design and evaluation of a memory architecture for a parallel matrix processor array / Betts, Nicholas M. January 2000
CD-ROM in pocket on back end paper. / Bibliography: leaves 254-259. / xiv, 259 leaves : ill. ; 30 cm + 1 computer optical disc (4 3/4 in.) / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / Proposes a specialized matrix processor architecture that targets numerically intensive algorithms that can be cast in matrix terms. / Thesis (Ph.D.)--University of Adelaide, Dept. of Electrical and Electronic Engineering, Advisory Centre for University Education, 2000
|
320 |
A light-weight middleware framework for fault-tolerant and secure distributed applications / Baird, Ian Jacob. 2007 (PDF)
Thesis (M.S.)--University of Missouri--Rolla, 2007. / Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed April 22, 2008). Includes bibliographical references (p. 70-71).
|