Optimized implementations of blocking and nonblocking collective operations are essential for scalable high-performance applications. Offloading such collective operations into the communication layer can improve performance and the asynchronous progression of the operations. However, such offloading schemes must remain flexible in order to support user-defined (sparse neighbor) collective communications. In this work, we describe an operating-system-kernel-based architecture for implementing an interpreter for the flexible Group Operation Assembly Language (GOAL) framework to offload collective communications. We describe an optimized scheme to store the schedules that define the collective operations and present an extension to profile the performance of the kernel layer. Our microbenchmarks demonstrate the effectiveness of the approach, and we show performance improvements over traditional progression in user space. We also discuss complications with the design and with offloading strategies in general. (A toy sketch of the schedule-interpretation idea follows the table of contents.)

1 Introduction
1.1 Related Work
2 The GOAL API
2.1 API Conventions
2.2 Basic GOAL Functionality
2.2.1 Initialization
2.2.2 Graph Creation
2.2.3 Adding Operations
2.2.4 Adding Dependencies
2.2.5 Scratchpad Buffer
2.2.6 Schedule Compilation
2.2.7 Schedule Execution
2.3 GOAL Extensions
3 ESP Transport Layer
3.1 Receive Handling
3.2 Transfer Management
3.2.1 Known Problems
4 The Architecture of ESPGOAL
4.1 Control Flow
4.1.1 Loading the Kernel Module
4.1.2 Adding a Communicator
4.1.3 Starting a Schedule
4.1.4 Schedule Progression
4.1.5 Progression by ESP
4.1.6 Unloading the Kernel Module
4.2 Data Structures
4.2.1 Starting a Schedule
4.2.2 Transfer Management
4.2.3 Stack Overflow Avoidance
4.3 Interpreting a GOAL Schedule
5 Implementing Collectives in GOAL
5.1 Recursive Doubling
5.2 Bruck's Algorithm
5.3 Binomial Trees
5.4 MPI_Barrier
5.5 MPI_Gather
6 Benchmarks
6.1 Testbed
6.2 Interrupt Coalescing Parameters
6.3 Benchmarking Point-to-Point Latency
6.4 Benchmarking Local Operations
6.5 Benchmarking Collective Communication Latency
6.6 Benchmarking Collective Communication Host Overhead
6.7 Comparing Different Ways to Use Ethernet NICs
7 Conclusions and Future Work
8 Acknowledgments
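
As the abstract notes, GOAL expresses a collective operation as a schedule: a dependency graph of send, receive, and local compute operations, where an operation may start once every operation it depends on has completed. The following self-contained C sketch models only this general interpretation technique; all names and data structures are invented for illustration and do not reflect the actual ESPGOAL implementation described in the thesis. A production interpreter would also replace the recursion below with an explicit work queue, which is presumably the concern of section 4.2.3 (Stack Overflow Avoidance).

    /* Toy model of GOAL-style schedule interpretation: every operation
     * carries a counter of unsatisfied dependencies, and an operation
     * becomes executable once that counter drops to zero.  This is a
     * hedged sketch of the technique only; the names and layout are
     * invented here, not taken from the thesis. */
    #include <stdio.h>

    #define MAX_OPS  8
    #define MAX_DEPS 8

    typedef struct {
        const char *label;      /* e.g. "recv from parent"            */
        int done;               /* already executed?                  */
        int unsat;              /* number of unsatisfied dependencies */
        int ndeps;              /* how many ops wait on this one      */
        int deps[MAX_DEPS];     /* indices of those waiting ops       */
    } op_t;

    static op_t ops[MAX_OPS];
    static int nops;

    static int add_op(const char *label)
    {
        ops[nops].label = label;
        return nops++;
    }

    /* "later" may only run after "earlier" has completed. */
    static void add_dep(int earlier, int later)
    {
        ops[earlier].deps[ops[earlier].ndeps++] = later;
        ops[later].unsat++;
    }

    /* Mark one op complete and release everything waiting on it.
     * Note the recursion: a real interpreter would use a work queue
     * instead, to bound kernel stack depth (cf. section 4.2.3). */
    static void complete(int op)
    {
        if (ops[op].done)
            return;
        ops[op].done = 1;
        printf("executed: %s\n", ops[op].label);
        for (int i = 0; i < ops[op].ndeps; i++) {
            int d = ops[op].deps[i];
            if (--ops[d].unsat == 0)
                complete(d);    /* last dependency satisfied: run it */
        }
    }

    int main(void)
    {
        /* A relay node as it appears in a binomial broadcast tree:
         * receive the payload, then forward it to two children. */
        int r  = add_op("recv payload from parent");
        int s1 = add_op("send payload to child 1");
        int s2 = add_op("send payload to child 2");
        add_dep(r, s1);
        add_dep(r, s2);

        /* Start every operation that has no dependencies. */
        for (int i = 0; i < nops; i++)
            if (ops[i].unsat == 0)
                complete(i);
        return 0;
    }

Run as-is, the sketch prints the three operations in dependency order (the receive first, then the two sends), mirroring how an interior node of a binomial tree behaves and why such a graph can be progressed asynchronously by a kernel module without involving the application thread.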
Identifier | oai:union.ndltd.org:DRESDEN/oai:qucosa:de:qucosa:19496
Date | 01 June 2011
Creators | Schneider, Timo; Eckelmann, Sven
Contributors | Mittenzwey, Nico; Rehm, Wolfgang; Technische Universität Chemnitz
Source Sets | Hochschulschriftenserver (HSSS) der SLUB Dresden |
Language | English |
Type | doc-type:StudyThesis, info:eu-repo/semantics/StudyThesis, doc-type:Text |
Rights | info:eu-repo/semantics/openAccess |