1. Principal Design Criteria Influencing the Performance of a Portable, High Performance Parallel I/O Implementation. Rajaram, Kumaran, 11 May 2002.
MPI-IO, the parallel I/O functionality of MPI-2, is a portable interface designed specifically to achieve high performance. This thesis proposes fundamental design criteria influencing the performance of portable, high-performance I/O middleware. It hypothesizes that overlapping I/O with computation and agglomerating I/O requests according to an application's access pattern improve the performance of a portable parallel I/O implementation. The work included the development of MercutIO, a complete, portable, high-performance MPI-IO implementation. MercutIO achieves portability through the Bulldog Abstract File System, a portable, efficient non-collective I/O interface also developed in this thesis. A new data access model based on non-blocking semantics is presented, and two new I/O metrics (degree of overlapping and degree of non-contiguity), as well as parallel I/O benchmarks essential to the performance appraisal of a parallel I/O implementation, are introduced.
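The abstract does not show MercutIO's interface; purely as an illustration of the overlap criterion it describes, a minimal MPI-IO sketch using the standard non-blocking file routines might look as follows. The file name "out.dat", the block size, and the dummy compute loop are placeholders invented for the example, not taken from the thesis.

    /* Illustrative sketch: overlapping a non-blocking MPI-IO write with
       computation. File name, sizes, and the compute step are placeholders. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define BLOCK (1 << 20)

    int main(int argc, char **argv)
    {
        MPI_File fh;
        MPI_Request req;
        int rank;
        double acc = 0.0;
        double *buf = malloc(BLOCK * sizeof(double));

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int i = 0; i < BLOCK; i++)
            buf[i] = rank + i;                          /* data this rank will write */

        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* Start a non-blocking write of this rank's block, then keep
           computing on independent data while the I/O proceeds. */
        MPI_Offset off = (MPI_Offset)rank * BLOCK * sizeof(double);
        MPI_File_iwrite_at(fh, off, buf, BLOCK, MPI_DOUBLE, &req);

        for (long i = 1; i <= 5000000L; i++)            /* overlapped placeholder work */
            acc += 1.0 / (double)i;

        MPI_Wait(&req, MPI_STATUS_IGNORE);              /* complete the I/O */
        MPI_File_close(&fh);

        printf("rank %d: overlapped compute gave %.6f\n", rank, acc);
        free(buf);
        MPI_Finalize();
        return 0;
    }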
2. Adapting Remote Direct Memory Access Based File System to Parallel Input/Output. Velusamy, Vijay, 13 December 2003.
Traditional file access interfaces rely on ubiquitous transports that impose severe restrictions on performance and prove insufficient for adaptation to parallel Input/Output (I/O). Remote Direct Memory Access (RDMA) based approaches aim to move data between different process address spaces with streamlined mediation and reduced operating system involvement, using synchronization semantics that differ from those of ubiquitous transports. This thesis studies the adaptability of RDMA-based transports to parallel I/O. Combining RDMA semantics with parallel I/O reduces overhead by overlapping communication with computation and by enhancing bandwidth. Although parallel I/O tends to increase latency in certain cases, the use of RDMA techniques mitigates this effect.
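The RDMA file-system interface itself is not reproduced in the abstract; as a hedged illustration of the one-sided, loosely synchronized data movement it describes, the following sketch uses standard MPI RMA calls to let one process deposit data directly into another's memory window. The window size and buffer contents are invented for the example, and this is not the thesis's transport layer.

    /* Illustrative sketch: one-sided (RDMA-style) data movement with MPI RMA. */
    #include <mpi.h>
    #include <stdio.h>

    #define COUNT 1024

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        double local[COUNT], target[COUNT];
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        for (int i = 0; i < COUNT; i++) {
            local[i]  = rank * 1000.0 + i;              /* data to push */
            target[i] = -1.0;                           /* window contents before the Put */
        }

        /* Every process exposes "target" as a window that peers may write into. */
        MPI_Win_create(target, COUNT * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);
        if (nprocs > 1 && rank == 0)
            /* Rank 0 writes straight into rank 1's window; the target side
               does not actively participate in the transfer. */
            MPI_Put(local, COUNT, MPI_DOUBLE, 1, 0, COUNT, MPI_DOUBLE, win);
        MPI_Win_fence(0, win);

        if (rank == 1)
            printf("rank 1: window now starts with %.1f (written by rank 0)\n", target[0]);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }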
3. A scheduling framework for dynamically resizable parallel applications. Swaminathan, Gautam, 18 February 2005.
Applications in science and engineering require large parallel systems in order to solve computational problems within a reasonable time frame. These applications can benefit from dynamic resizing during the course of their execution. Dynamic resizing enables fine-grained control over resource allocation to jobs and results in better system throughput and job turnaround time. We have implemented a framework that enables dynamic resizing of MPI applications, building on the dynamic process management features of the recently released MPI-2 standard. The work described in this thesis is part of a larger effort to design and implement a system for supporting and leveraging dynamically resizable parallel applications. We provide a scheduling framework, an API for dynamic resizing, and libraries to efficiently redistribute data to new processor topologies.
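The scheduling framework's own API is not given in the abstract; below is a minimal sketch of the MPI-2 dynamic process management it builds on. The worker executable name "worker" and the count of four new processes are placeholders, and the merged communicator is only one possible way of addressing the enlarged processor set.

    /* Illustrative sketch: growing an MPI job at runtime with MPI-2
       dynamic process management (not the thesis's resizing API). */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm workers;      /* intercommunicator to the newly spawned processes */
        MPI_Comm everyone;     /* single communicator spanning old and new processes */

        MPI_Init(&argc, &argv);

        /* Ask the runtime for 4 additional processes running "worker". */
        MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL, 0,
                       MPI_COMM_WORLD, &workers, MPI_ERRCODES_IGNORE);

        /* Merge parent and children into one intracommunicator, over which
           data could then be redistributed to the new processor topology. */
        MPI_Intercomm_merge(workers, 0, &everyone);

        MPI_Comm_free(&everyone);
        MPI_Comm_disconnect(&workers);
        MPI_Finalize();
        return 0;
    }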
4. Escalonamento Work-Stealing de programas Divisão-e-Conquista com MPI-2 (Scheduling Divide-and-Conquer programs by Work-Stealing with MPI-2). Pezzi, Guilherme Peretti, January 2006.
In order to be portable and efficient on modern HPC architectures, the execution of a parallel program must be adaptable. This work shows how to achieve this with MPI, using dynamic process creation coupled with Divide-and-Conquer programming and a Work-Stealing strategy that balances the MPI processes at runtime in heterogeneous and/or dynamic environments. The implementation of a Divide-and-Conquer application with MPI is explained, as well as the implementation of a Work-Stealing strategy. Experimental results are provided for a synthetic application, the N-Queens computation, validating both the adaptability and the efficiency of the code. The results show that it is possible to use a widely adopted standard such as MPI even on HPC platforms that are not as homogeneous as a cluster.
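As a hedged sketch of the divide-and-conquer pattern over MPI-2 dynamic process creation (the work-stealing scheduler and the actual N-Queens code are not reproduced here), a process might split its problem and spawn children for the halves roughly as follows. The solve_directly leaf routine, the THRESHOLD cutoff, and the problem encoding are placeholders invented for the example.

    /* Illustrative sketch: divide-and-conquer via MPI-2 process spawning.
       Problem encoding, threshold, and leaf computation are placeholders;
       the thesis's work-stealing layer is not shown. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define THRESHOLD 8

    static long solve_directly(int n)        /* placeholder leaf computation */
    {
        long r = 0;
        for (int i = 0; i < n; i++) r += i;
        return r;
    }

    int main(int argc, char **argv)
    {
        MPI_Comm parent, kids[2];
        int n = (argc > 1) ? atoi(argv[1]) : 32;
        long result;

        MPI_Init(&argc, &argv);
        MPI_Comm_get_parent(&parent);        /* MPI_COMM_NULL at the root process */

        if (n <= THRESHOLD) {
            result = solve_directly(n);      /* conquer: problem small enough */
        } else {
            /* Divide: spawn one child per half, then combine partial results. */
            char size_arg[16], *child_argv[2] = { size_arg, NULL };
            long partial[2];
            snprintf(size_arg, sizeof size_arg, "%d", n / 2);
            for (int k = 0; k < 2; k++)
                MPI_Comm_spawn(argv[0], child_argv, 1, MPI_INFO_NULL, 0,
                               MPI_COMM_SELF, &kids[k], MPI_ERRCODES_IGNORE);
            for (int k = 0; k < 2; k++) {
                MPI_Recv(&partial[k], 1, MPI_LONG, 0, 0, kids[k], MPI_STATUS_IGNORE);
                MPI_Comm_disconnect(&kids[k]);
            }
            result = partial[0] + partial[1];
        }

        if (parent != MPI_COMM_NULL) {       /* report the result to the spawner */
            MPI_Send(&result, 1, MPI_LONG, 0, 0, parent);
            MPI_Comm_disconnect(&parent);
        } else {
            printf("final result: %ld\n", result);
        }
        MPI_Finalize();
        return 0;
    }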
5. Designing Support For MPI-2 Programming Interfaces On Modern Interconnects. Gangadharappa, Tejus A., 02 September 2009.
No description available.
6. Designing Scalable and High Performance One Sided Communication Middleware for Modern Interconnects. Santhanaraman, Gopalakrishnan, 02 September 2009.
No description available.