
Scalability-Driven Approaches to Key Aspects of the Message Passing Interface for Next Generation Supercomputing

The Message Passing Interface (MPI), which dominates the supercomputing programming environment,
is used to orchestrate and carry out communication in High Performance Computing (HPC).

How far HPC programs can scale depends in large part on the ability to achieve fast communication
and to overlap communication with computation, or communication with other communication.

This dissertation proposes a new asynchronous solution for the nonblocking Rendezvous protocol used
between pairs of processes to transfer large payloads. Beyond enforcing communication/computation
overlap in a comprehensive way, the proposal improves on existing network device-agnostic
asynchronous solutions by being memory-scalable and by avoiding brute-force strategies.
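
For context, the communication/computation overlap that such an asynchronous Rendezvous design targets
can be illustrated with the standard nonblocking MPI point-to-point interface. The sketch below is
purely illustrative of the overlap pattern; it does not reproduce the dissertation's protocol, and
actual overlap for large messages depends on the MPI library making asynchronous progress.

    /* Illustrative sketch of communication/computation overlap with
     * nonblocking MPI point-to-point calls; run with at least 2 processes.
     * The large-payload case is where the Rendezvous protocol applies. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int N = 1 << 24;                    /* large payload */
        double *buf = malloc(N * sizeof(double));
        MPI_Request req = MPI_REQUEST_NULL;

        if (rank == 0)
            MPI_Isend(buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
        else if (rank == 1)
            MPI_Irecv(buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);

        /* Independent computation would go here; whether it truly overlaps
         * the transfer depends on the MPI library making asynchronous
         * progress, which is what the proposed Rendezvous design enforces. */

        MPI_Wait(&req, MPI_STATUS_IGNORE);        /* no-op on a null request */

        free(buf);
        MPI_Finalize();
        return 0;
    }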

Achieving overlap between communication and computation is important, but each communication is
also expected to incur minimal latency. In that respect, the processing of the queues that hold
messages pending reception inside the MPI middleware is expected to be fast. Currently, though,
that processing slows down as program scales grow. This research presents a novel
scalability-driven message queue whose processing skips altogether large portions of queue items
that are deterministically guaranteed to lead to unfruitful searches. Because it has little
sensitivity to program size, the proposed message queue maintains very good performance
while displaying a low and flattening memory-footprint growth pattern.
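
As background, the matching performed by such receive queues can be sketched as a linear traversal
against the (source, tag, communicator) triple of a message; this reflects the conventional baseline
whose cost grows with scale, not the proposed scalability-driven queue. All struct and field names
below are hypothetical.

    /* Hypothetical sketch of conventional linear matching in an MPI
     * middleware receive queue: each pending item is compared against the
     * (source, tag, communicator) triple of an incoming message. The
     * dissertation's queue avoids scanning items that provably cannot match. */
    #include <stddef.h>

    typedef struct queue_item {
        int source;                 /* a wildcard value means "any source" */
        int tag;                    /* a wildcard value means "any tag"    */
        int context_id;             /* identifies the communicator         */
        struct queue_item *next;
    } queue_item;

    /* Wildcard values; the real constants live inside the MPI library. */
    enum { ANY_SOURCE = -1, ANY_TAG = -1 };

    static queue_item *match(queue_item *head, int src, int tag, int ctx)
    {
        for (queue_item *it = head; it != NULL; it = it->next) {
            if (it->context_id != ctx)
                continue;
            if (it->source != ANY_SOURCE && it->source != src)
                continue;
            if (it->tag != ANY_TAG && it->tag != tag)
                continue;
            return it;              /* first match wins, preserving MPI ordering */
        }
        return NULL;                /* caller then enqueues the message instead  */
    }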

Due to the blocking nature of its required synchronizations, the one-sided communication model of
MPI creates both communication/computation and communication/communication serializations. This
research addresses these issues, as well as latency-related inefficiencies documented for MPI
one-sided communications, by proposing completely nonblocking and non-serializing versions of
those synchronizations. The improvements, meant for consideration in a future MPI standard,
also allow new classes of programs to be expressed more efficiently in MPI.
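
For illustration, the kind of blocking synchronization in question appears in the standard MPI
one-sided interface, where the fence calls are blocking collectives; the sketch below shows that
standard usage only, not the nonblocking, non-serializing variants proposed here.

    /* Illustrative use of standard MPI one-sided communication with
     * MPI_Win_fence: the fences are blocking collectives, so computation
     * and further communication cannot proceed past them. The dissertation
     * proposes nonblocking alternatives to such synchronizations (not shown). */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local = (double)rank;
        double target_buf = 0.0;
        MPI_Win win;
        MPI_Win_create(&target_buf, sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);                   /* blocking epoch start */
        int peer = (rank + 1) % size;
        MPI_Put(&local, 1, MPI_DOUBLE, peer, 0, 1, MPI_DOUBLE, win);
        MPI_Win_fence(0, win);                   /* blocking epoch end   */

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }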

Finally, a persistent distributed service is designed over MPI to show the impact of MPI at large
scales beyond communication-only activities. MPI is analyzed in situations of resource exhaustion,
partial failure, and heavy use of internal objects for communicating and non-communicating
routines. Important scalability issues are revealed and solution approaches are put forth.

Thesis (Ph.D., Electrical & Computer Engineering) -- Queen's University, 2014-05-23 15:08:58.56

Identifier: oai:union.ndltd.org:LACETR/oai:collectionscanada.gc.ca:OKQ.1974/12194
Date: 23 May 2014
Creators: Zounmevo, Ayi Judicael
Contributors: Queen's University (Kingston, Ont.). Theses (Queen's University (Kingston, Ont.))
Source Sets: Library and Archives Canada ETDs Repository / Centre d'archives des thèses électroniques de Bibliothèque et Archives Canada
Language: English
Detected Language: English
Type: Thesis
Rights: This publication is made available by the authority of the copyright owner solely for the purpose of private study and research and may not be copied or reproduced except as permitted by the copyright laws without written authority from the copyright owner.
Relation: Canadian theses
