Improving the Performance of Selected MPI Collective Communication Operations on InfiniBand Networks
Viertel, Carsten, 23 September 2007
The performance of collective communication operations is one of the deciding
factors in the overall performance of an MPI application. Open MPI's component
architecture offers an easy way to implement new algorithms for collective
operations, but current implementations use the point-to-point components to
access the InfiniBand network. This work therefore attempts to improve the
performance of a collective component by accessing the InfiniBand network
directly. This should avoid overhead and make it possible to tune the
algorithms to this specific network.
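To make the layering concrete, the sketch below shows how a collective such as
MPI_Gather can be expressed purely in terms of point-to-point calls, which is
roughly what a point-to-point-based collective component does internally. It is
an illustrative C example against the standard MPI API; the function name and
structure are hypothetical and are not taken from the thesis or from Open MPI's
source code.

    /* Illustrative sketch only: a linear gather built on point-to-point
     * calls, showing what "collectives layered over point-to-point
     * components" means conceptually. Assumes a contiguous datatype. */
    #include <mpi.h>
    #include <string.h>

    static int linear_gather(const void *sendbuf, int count, MPI_Datatype dtype,
                             void *recvbuf, int root, MPI_Comm comm)
    {
        int rank, size, i;
        MPI_Aint lb, extent;

        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);
        MPI_Type_get_extent(dtype, &lb, &extent);

        if (rank != root) {
            /* Non-root ranks simply send their block to the root. */
            return MPI_Send(sendbuf, count, dtype, root, 0, comm);
        }

        /* The root copies its own block and receives one block per peer. */
        memcpy((char *)recvbuf + (MPI_Aint)rank * count * extent,
               sendbuf, (size_t)(count * extent));
        for (i = 0; i < size; ++i) {
            if (i == root) continue;
            MPI_Recv((char *)recvbuf + (MPI_Aint)i * count * extent,
                     count, dtype, i, 0, comm, MPI_STATUS_IGNORE);
        }
        return MPI_SUCCESS;
    }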
The first part of this work gives a short overview of the InfiniBand
Architecture and Open MPI. The next part analyzes several models of parallel
computation. Afterwards, various algorithms for the MPI_Scatter, MPI_Gather,
and MPI_Allgather operations are presented, and their theoretical performance
is analyzed with the LogfP and LogGP models.
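For orientation, the LogGP model (with parameters L for latency, o for
per-message overhead, g for the gap between messages, G for the gap per byte,
and P for the number of processes) charges a single k-byte message roughly as
shown below; the allgather estimate that follows is a rough, illustrative bound
for a recursive-doubling scheme and is not reproduced from the thesis.

    % Cost of one k-byte message in the standard LogGP formulation:
    T_{\mathrm{msg}}(k) = o + (k - 1)\,G + L + o

    % A recursive-doubling allgather needs about \lceil \log_2 P \rceil rounds;
    % if round r exchanges 2^{r} k bytes per process, a rough bound is
    T_{\mathrm{allgather}} \approx \sum_{r=0}^{\lceil \log_2 P \rceil - 1}
        \left( 2o + L + (2^{r} k - 1)\,G \right)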
Selected algorithms are implemented as part of an Open MPI collective
component. Finally, the performance of different algorithms and different MPI
implementations is compared. The test results show that the performance of the
operations could be improved for several ranges of message and communicator
sizes.
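For reference, the three operations studied use the standard MPI interfaces
shown in the sketch below; the block size and buffers are hypothetical and are
not taken from the thesis's benchmarks.

    /* Minimal usage sketch of the three collectives examined in the thesis.
     * The block size and communicator are illustrative only. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        const int block = 1024;            /* hypothetical block size (ints) */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *scatter_src = NULL, *gather_dst = NULL;
        int *mine   = malloc(block * sizeof(int));
        int *allbuf = malloc((size_t)size * block * sizeof(int));
        if (rank == 0) {
            scatter_src = calloc((size_t)size * block, sizeof(int));
            gather_dst  = malloc((size_t)size * block * sizeof(int));
        }

        /* Root distributes one block to every rank. */
        MPI_Scatter(scatter_src, block, MPI_INT, mine, block, MPI_INT,
                    0, MPI_COMM_WORLD);
        /* Every rank returns its block to the root. */
        MPI_Gather(mine, block, MPI_INT, gather_dst, block, MPI_INT,
                   0, MPI_COMM_WORLD);
        /* Every rank ends up with the blocks of all ranks. */
        MPI_Allgather(mine, block, MPI_INT, allbuf, block, MPI_INT,
                      MPI_COMM_WORLD);

        free(mine); free(allbuf);
        if (rank == 0) { free(scatter_src); free(gather_dst); }
        MPI_Finalize();
        return 0;
    }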