
Structuring and supporting programs on parallel computers

Distributed-memory multicomputers will not find acceptance outwith the specialised field of high-performance numerical applications until programs written for them, using systems similar to those found on conventional uniprocessors, can be run with an efficiency comparable to that achieved on those uniprocessors. This work argues that the key to constructing upwardly compatible programming systems for multicomputers based on message passing, which are both efficient and usable, and which allow effective monitoring, is to require those systems to be structured, in the same way that modern programming languages require programs to be structured. It is further argued that even a well-structured message-passing system is too low-level for most applications programming, and that some more abstract system is required. The merits of one such abstraction, called generative communication, are considered, and suggestions are made for enriching standard implementations in order to improve their usability and efficiency. Finally, it is argued that the performance of any programming system for distributed-memory multicomputers, regardless of its degree of abstraction, is largely determined by the degree to which it eliminates or avoids contention. A technique for doing this, based on opportunistic combining networks, is introduced, and its effect on performance is investigated using simulations.
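The term "generative communication" in the abstract conventionally refers to the Linda model, in which processes cooperate by depositing, withdrawing, and reading tuples in a shared associative space rather than by addressing messages to named partners. As a rough single-process illustration only (the thesis's own systems are not reproduced here), the following Python sketch shows the three core operations; the class name TupleSpace, the method names out, in_, and rd, and the thread-based blocking are illustrative assumptions, not the implementations evaluated in the thesis.

    import threading

    class TupleSpace:
        """Minimal tuple space: processes communicate by generating
        tuples into a shared space, not by naming message partners.
        (Illustrative sketch only; not the thesis's implementation.)"""

        def __init__(self):
            self._tuples = []
            self._cond = threading.Condition()

        def out(self, *tup):
            """Deposit a tuple into the space (non-blocking)."""
            with self._cond:
                self._tuples.append(tup)
                self._cond.notify_all()

        def _match(self, pattern, tup):
            # None acts as a wildcard ("formal") field;
            # all other fields must match exactly.
            return len(pattern) == len(tup) and all(
                p is None or p == t for p, t in zip(pattern, tup))

        def in_(self, *pattern):
            """Withdraw a matching tuple, blocking until one exists."""
            with self._cond:
                while True:
                    for tup in self._tuples:
                        if self._match(pattern, tup):
                            self._tuples.remove(tup)
                            return tup
                    self._cond.wait()

        def rd(self, *pattern):
            """Read a matching tuple without removing it, blocking if absent."""
            with self._cond:
                while True:
                    for tup in self._tuples:
                        if self._match(pattern, tup):
                            return tup
                    self._cond.wait()

    if __name__ == "__main__":
        ts = TupleSpace()
        # A worker generates a result tuple; the consumer withdraws it
        # by pattern, never knowing which process produced it.
        threading.Thread(target=lambda: ts.out("sum", 2 + 3)).start()
        print(ts.in_("sum", None))  # blocks until ("sum", 5) appears

The anonymity of the exchange is the point of the abstraction: because tuples are matched associatively rather than routed to named receivers, the model sits at a higher level than the message-passing systems the abstract describes as too low-level for most applications programming.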

Identifier: oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:663889
Date: January 1992
Creators: Wilson, Gregory V.
Publisher: University of Edinburgh
Source Sets: Ethos UK
Detected Language: English
Type: Electronic Thesis or Dissertation
Source: http://hdl.handle.net/1842/12151
