
Evaluation of Parallel Programming Standards for Embedded High Performance Computing

The aim of this project is to evaluate parallel programming standards for embedded high performance computing. Modern radar signal processing places huge demands on computational speed, so more processors are needed to reach sufficient performance. One way of obtaining high performance is to divide the work over multiple processors, while at the same time keeping communication overhead low and speedup good. This has been done using the parallel programming standards OpenMP and MPI, applied to a radar signal processing benchmark that resembles many tasks in real radar signal processing. The OpenMP programs were run on a shared-memory SUNFIRE E2900 system; the MPI programs were run on a SUNFIRE E2900 containing 8 nodes, using Sun HPC ClusterTools v5. The OpenMP program shows good speedup up to 5 processors; thereafter an increase in communication overhead is observed. MPI shows low communication overhead at first, but its efficiency decreases as the number of processors grows. Both OpenMP and MPI thus behave similarly: beyond a certain number of processors, efficiency falls and communication overhead rises. According to our results, OpenMP is relatively easy to use compared to MPI, since with MPI it is up to the programmer to make explicit calls in order to parallelize the work.
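As a rough illustration of the difference in programming effort noted in the abstract (this is a minimal sketch, not the thesis' actual benchmark code), the following C fragments parallelize a simple per-range-bin filtering loop, first with an OpenMP pragma and then with explicit MPI calls. The array size RANGE_BINS, the scaling operation, and all variable names are assumptions made for illustration only.

/* Hypothetical OpenMP sketch: one pragma splits the loop across threads.
   Compile with e.g. "cc -xopenmp" (Sun Studio) or "gcc -fopenmp". */
#include <stdio.h>
#include <omp.h>

#define RANGE_BINS 4096              /* assumed problem size */

int main(void)
{
    static double echo[RANGE_BINS], filtered[RANGE_BINS];

    for (int i = 0; i < RANGE_BINS; i++)
        echo[i] = (double)i;         /* dummy input data */

    #pragma omp parallel for         /* the only change needed for parallelism */
    for (int i = 0; i < RANGE_BINS; i++)
        filtered[i] = 0.5 * echo[i]; /* stand-in for real radar filtering */

    printf("threads available: %d\n", omp_get_max_threads());
    return 0;
}

With MPI, by contrast, the programmer must distribute the data, compute locally, and collect the results with explicit library calls:

/* Hypothetical MPI sketch of the same loop.
   Compile with "mpicc", run with "mpirun -np <n>". */
#include <stdio.h>
#include <mpi.h>

#define RANGE_BINS 4096              /* assumed size, divisible by process count */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int chunk = RANGE_BINS / size;
    double local_in[chunk], local_out[chunk];           /* per-process slice */
    static double echo[RANGE_BINS], filtered[RANGE_BINS];

    if (rank == 0)
        for (int i = 0; i < RANGE_BINS; i++)
            echo[i] = (double)i;                        /* dummy input on root */

    /* Explicit distribution, local computation, and collection. */
    MPI_Scatter(echo, chunk, MPI_DOUBLE, local_in, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    for (int i = 0; i < chunk; i++)
        local_out[i] = 0.5 * local_in[i];               /* stand-in for real filtering */
    MPI_Gather(local_out, chunk, MPI_DOUBLE, filtered, chunk, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("processed %d range bins on %d processes\n", RANGE_BINS, size);

    MPI_Finalize();
    return 0;
}

The contrast mirrors the abstract's conclusion: the OpenMP version adds a single directive to serial code, while the MPI version requires the programmer to manage data distribution and communication explicitly.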

Identifier: oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:hh-6047
Date: January 2010
Creators: James Emmanuel Roy, Muggalla, Garimella, Pradeep
Source Sets: DiVA Archive at Upsalla University
Language: English
Detected Language: English
Type: Student thesis, info:eu-repo/semantics/masterThesis, text
Format: application/pdf
Rights: info:eu-repo/semantics/openAccess
