High-performance data-parallel input/output

Existing parallel file systems are proving inadequate in two important arenas:
programmability and performance. Both of these inadequacies can largely be traced
to the fact that nearly all parallel file systems evolved from Unix and rely on a Unix-oriented,
single-stream, block-at-a-time approach to file I/O. This one-size-fits-all
approach to parallel file systems is inadequate for supporting applications running
on distributed-memory parallel computers.
This research provides a migration path away from the traditional approaches
to parallel I/O at two levels. At the level seen by the programmer, we show how
file operations can be closely integrated with the semantics of a parallel language.
Principles for this integration are illustrated through their application to C*, a
virtual-processor-oriented language. The result is that traditional C file operations
with familiar semantics can be used in C* at the level where the programmer
works: the virtual processor level. To facilitate high performance within this
framework, machine-independent
modes are used. Modes change the performance of file operations,
not their semantics, so programmers need not use the ambiguous operations found in
many parallel file systems. An automatic mode detection technique is presented
that saves the programmer from extra syntax and low-level file system details. This
mode detection system ensures that the most commonly encountered file operations
are performed using high-performance modes.
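
To suggest how this might look in practice, consider the following hypothetical
C* sketch. The shape, the variable names, and the exact behavior of fread on a
parallel variable are illustrative assumptions rather than the interface defined
in the dissertation; the point is that ordinary stdio calls, issued at the
virtual processor level, can be recognized and served by a high-performance mode.

    #include <stdio.h>

    shape [65536]grid;            /* one virtual processor per element */

    void load_temperatures(const char *path)
    {
        double:grid temp;         /* parallel variable: one double per VP */
        FILE *fp = fopen(path, "rb");
        if (fp == NULL)
            return;

        with (grid) {
            /* Familiar C semantics at the virtual-processor level:
             * each virtual processor reads the element at its own
             * position in the file.  Because the virtual processors
             * collectively read the file in canonical order, an
             * automatic mode detector could serve this call with a
             * high-performance collective mode, with no extra syntax. */
            fread(&temp, sizeof(double), 1, fp);
        }

        fclose(fp);
    }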
While the high-performance modes allow fast collective movement of file data,
they must also include optimizations for redistributing that data, a common operation
in production scientific code. This need is addressed at the file system level, where
we provide enhancements to Disk-Directed I/O for redistributing file data. Two
enhancements are geared toward speeding up fine-grained redistributions. One uses a two-phase,
or indirect, approach to redistributing data among compute nodes. The
other relies on I/O nodes to guide the redistribution by building packets bound for
compute nodes. We model the performance of these enhancements and identify
the key parameters that determine when each approach should be used. Finally, we
introduce the notion of collective prefetching and identify its performance benefits
and implementation tradeoffs.
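
To make the two-phase (indirect) idea concrete, the following is a minimal
compute-node-level sketch in C with MPI. It illustrates the general technique
only: the dissertation's enhancements perform the redistribution inside the
file system's Disk-Directed I/O layer, not in application code, and the file
name, sizes, and MPI-based exchange here are assumptions.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Phase 1: each node reads one large contiguous block, keeping the
     * disk traffic coarse-grained.  Phase 2: nodes exchange data all to
     * all in memory, where fine-grained movement is far cheaper than
     * fine-grained disk access.  The file "field.dat" is assumed to
     * hold N doubles in block order; the program wants them cyclic. */
    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        const long N = 1L << 20;       /* total elements (assumed) */
        long per  = N / nprocs;        /* elements read per node */
        long each = per / nprocs;      /* per destination; assumes
                                          nprocs*nprocs divides N */

        /* Phase 1: one big contiguous read per node. */
        double *blk = malloc(per * sizeof *blk);
        FILE *fp = fopen("field.dat", "rb");
        if (fp == NULL)
            MPI_Abort(MPI_COMM_WORLD, 1);
        fseek(fp, rank * per * (long)sizeof(double), SEEK_SET);
        fread(blk, sizeof(double), per, fp);
        fclose(fp);

        /* Pack by destination: under a cyclic distribution, global
         * element g = rank*per + i belongs to node g % nprocs. */
        double *send = malloc(per * sizeof *send);
        double *recv = malloc(per * sizeof *recv);
        int *cnt  = malloc(nprocs * sizeof *cnt);
        int *disp = malloc(nprocs * sizeof *disp);
        int *fill = malloc(nprocs * sizeof *fill);
        for (int p = 0; p < nprocs; p++) {
            cnt[p]  = (int)each;       /* uniform by construction */
            disp[p] = (int)(p * each);
            fill[p] = disp[p];
        }
        for (long i = 0; i < per; i++)
            send[fill[(rank * per + i) % nprocs]++] = blk[i];

        /* Phase 2: the all-to-all exchange; the pattern is symmetric,
         * so the same counts and displacements serve both sides. */
        MPI_Alltoallv(send, cnt, disp, MPI_DOUBLE,
                      recv, cnt, disp, MPI_DOUBLE, MPI_COMM_WORLD);

        /* recv now holds this node's cyclic elements, grouped by the
         * source node that read them in phase 1. */
        free(blk); free(send); free(recv);
        free(cnt); free(disp); free(fill);
        MPI_Finalize();
        return 0;
    }

The tradeoff the abstract describes, when this indirect route beats having the
I/O nodes build compute-node-bound packets directly, is what the performance
model is meant to capture.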

Graduation date: 1997
Identifier: oai:union.ndltd.org:ORGSU/oai:ir.library.oregonstate.edu:1957/34460
Date: 19 July 1996
Creators: Moore, Jason Andrew
Contributors: Quinn, Michael J.
Source Sets: Oregon State University
Language: en_US
Detected Language: English
Type: Thesis/Dissertation
