GPU acceleration of matrix-based methods in computational electromagnetics
Lezar, Evan 03 1900 (has links)
Thesis (PhD (Electrical and Electronic Engineering))--University of Stellenbosch, 2011.

ENGLISH ABSTRACT: This work considers the acceleration of matrix-based computational electromagnetic (CEM)
techniques using graphics processing units (GPUs). These massively parallel processors have
gained much support since late 2006, with software tools such as CUDA and OpenCL greatly
simplifying the process of harnessing the computational power of these devices. As with any
advances in computation, the use of these devices enables the modelling of more complex problems,
which in turn should give rise to better solutions to a number of global challenges faced
at present.
For the purpose of this dissertation, CUDA is used in an investigation of the acceleration
of two methods in CEM that are used to tackle a variety of problems. The first of these is the
Method of Moments (MOM), which is typically used to model radiation and scattering problems,
with the latter being considered here. For the CUDA acceleration of the MOM presented here,
the assembly and subsequent solution of the matrix equation associated with the method are
considered. This is done for both single- and double-precision floating-point matrices.
For the solution of the matrix equation, general dense linear algebra techniques are used,
which allow for the use of a vast expanse of existing knowledge on the subject. This also means
that implementations developed here along with the results presented are immediately applicable
to the same wide array of applications where these methods are employed.
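Concretely, the MOM reduces the integral-equation problem to a dense linear system, conventionally written Z·I = V, which is factorised and solved with standard dense LAPACK-style routines. The sketch below illustrates only this dense-solve step with NumPy (on a GPU the same step maps onto libraries such as cuBLAS/cuSOLVER or MAGMA); the matrix here is a well-conditioned random stand-in, not an actual MOM impedance matrix, and the sizes are arbitrary:

```python
import numpy as np

# Hypothetical size; a real MOM impedance matrix Z is dense and complex,
# with one row/column per basis function on the discretised surface.
n = 512
rng = np.random.default_rng(0)

# Random stand-ins for the assembled system Z I = V.
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Z += n * np.eye(n)  # diagonal shift keeps the filler matrix well conditioned
V = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Single- vs double-precision solves, as compared in the thesis.
for dtype in (np.complex64, np.complex128):
    I = np.linalg.solve(Z.astype(dtype), V.astype(dtype))  # LU factorise + solve
    residual = np.linalg.norm(Z.astype(dtype) @ I - V.astype(dtype))
    print(dtype.__name__, residual)
```

The single-precision solve halves memory traffic and storage at the cost of a larger residual, which is the trade-off behind offering both precisions.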
The implementations presented, for both the assembly and the solution of the matrix equation,
result in significant speedups over multi-core CPU implementations, with speedups of up to 300x
and 10x, respectively, being measured. They also overcome one of the major limitations in the
use of GPUs as accelerators, namely limited memory capacity, allowing problems up to 16 times
larger than would normally be possible to be solved.
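The principle behind exceeding device memory can be illustrated by streaming panels of a matrix that is too large for the "device" through a smaller resident buffer. This toy sketch uses a plain matrix-vector product (not the thesis's LU-based solver) and ordinary NumPy copies standing in for host-to-device transfers:

```python
import numpy as np

def panel_matvec(A, x, panel_rows):
    """Compute y = A @ x while only ever holding `panel_rows` rows of A
    in the (simulated) device buffer at a time."""
    m, _ = A.shape
    y = np.empty(m, dtype=A.dtype)
    for start in range(0, m, panel_rows):
        stop = min(start + panel_rows, m)
        panel = np.array(A[start:stop])   # "host -> device" copy of one panel
        y[start:stop] = panel @ x         # operate on the resident panel only
    return y

A = np.arange(12.0).reshape(3, 4)
x = np.ones(4)
print(panel_matvec(A, x, panel_rows=2))  # matches A @ x
```

The same panel-at-a-time pattern, applied to the factorisation itself, is what lets a GPU work through a matrix many times larger than its own memory.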
The second matrix-based technique considered is the Finite Element Method (FEM), which
allows for the accurate modelling of complex geometric structures including non-uniform dielectric
and magnetic properties of materials, and is particularly well suited to handling bounded
structures such as waveguides. In this work the CUDA acceleration of the cutoff and dispersion
analysis of three waveguide configurations is presented. The modelling of these problems using
an open-source software package, FEniCS, is also discussed.
Once again, the problem can be approached from a linear algebra perspective, with the
formulation in this case resulting in a generalised eigenvalue (GEV) problem. For the problems
considered, a total solution speedup of up to 7x is measured for the solution of the generalised
eigenvalue problem, with up to 22x being attained for the solution of the standard eigenvalue
problem that forms part of the GEV problem.

AFRIKAANSE OPSOMMING: In this work the acceleration of matrix methods in computational electromagnetics
(CEM) through the use of graphics processing units (GPUs) is considered. The use of
these processing units was made considerably easier in 2006 by software packages such as CUDA
and OpenCL. These devices, like other improvements in processing power, make it possible
to solve more complex problems, which in turn puts scientists in a better position to tackle
global challenges.

In this dissertation CUDA is used to investigate the acceleration of two methods in CEM,
namely the Method of Moments (MOM) and the Finite Element Method (FEM). The MOM is
typically used to solve radiation and scattering problems; only the scattering problems are
considered here. CUDA is used to accelerate both the assembly of the MOM matrix and the
subsequent solution of the matrix equation associated with the method.

General dense linear algebra techniques are used to solve the matrix equations. This makes
the wealth of existing knowledge in the field available for the solution, and also means that any
implementations developed and results obtained apply to a wide variety of problems that use
these linear algebra methods.

Both the assembly of the matrix and the solution of the matrix equation were found to be
considerably faster than multi-core CPU implementations, with speedups of up to 300x and 10x
measured for the assembly and solution phases, respectively. The amount of memory available
to the GPU is one of the important limitations on the use of GPUs for large problems. This
limitation is overcome here, and problems up to 16 times larger than the GPU's available
memory are accommodated and successfully solved.

The Finite Element Method, in turn, is used to model complex geometries as well as non-uniform
material properties, and is also well suited to handling bounded structures such as
waveguides. Here CUDA is used to accelerate the cutoff and dispersion analysis of three
waveguide configurations. These problems are implemented using an open-source software
collection known as FEniCS, which is also discussed.

The problems that arise in the FEM can again be approached from a linear algebra viewpoint.
In this case the formulation leads to a generalised eigenvalue problem. For the waveguide
problems investigated, a speedup of up to 7x was measured for the solution of the generalised
eigenvalue problem, and the standard eigenvalue problem that forms a step in its solution was
accelerated by up to 22x.
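The generalised eigenvalue problem described above has the form A x = λ B x. When the mass-like matrix B is symmetric positive definite, as a FEM mass matrix typically is, the problem reduces via a Cholesky factorisation B = L Lᵀ to the standard symmetric eigenproblem (L⁻¹ A L⁻ᵀ) y = λ y, which is the standard-eigenvalue step the thesis accelerates. A minimal SciPy sketch, using small random stand-in matrices rather than actual FEM waveguide matrices:

```python
import numpy as np
from scipy.linalg import cholesky, eigvalsh, solve_triangular

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))
A = M + M.T                      # symmetric "stiffness-like" matrix
N = rng.standard_normal((n, n))
B = N @ N.T + n * np.eye(n)      # symmetric positive definite "mass-like" matrix

# Reduction: with B = L L^T, A x = lam B x becomes (L^-1 A L^-T) y = lam y.
L = cholesky(B, lower=True)
C = solve_triangular(L, solve_triangular(L, A, lower=True).T, lower=True)
std_eigs = eigvalsh(C)           # standard symmetric eigenproblem

gen_eigs = eigvalsh(A, B)        # direct generalised solve, for comparison
print(np.allclose(np.sort(std_eigs), np.sort(gen_eigs)))  # True
```

The reduction explains why the two speedup figures differ: the 22x applies to the inner standard eigenproblem, while the 7x covers the whole pipeline including the factorisation and back-transformation.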
Automatic data distribution for massively parallel processors
García Almiñana, Jordi 16 April 1997 (has links)
Massively Parallel Processor systems provide the computational power required to solve most large-scale High Performance Computing applications. Machines with physically distributed memory offer a cost-effective way to achieve this performance; however, these systems are very difficult to program and tune. In a distributed-memory organization each processor has direct access to its local memory and indirect access to the remote memories of other processors, and accessing a local memory location can be more than an order of magnitude faster than accessing a remote one. In these systems, the choice of a good data distribution strategy can dramatically improve performance, although different parts of the data distribution problem have been proved to be NP-complete.

The selection of an optimal data placement depends on the program structure, the program's data sizes, the compiler capabilities, and some characteristics of the target machine. In addition, there is often a trade-off between minimizing interprocessor data movement and load balancing on processors. Automatic data distribution tools can assist the programmer in selecting a good data layout strategy. These are usually source-to-source tools that annotate the original program with data distribution directives. Crucial aspects such as data movement, parallelism, and load balance have to be taken into consideration in a unified way to efficiently solve the data distribution problem.

In this thesis a framework for automatic data distribution is presented, in the context of a parallelizing environment for massively parallel processor (MPP) systems. The applications considered for parallelization are usually regular problems, in which the data structures are dense arrays. The data mapping strategy generated is optimal for a given problem size and target MPP architecture, according to our current cost and compilation model.

A single data structure, named the Communication-Parallelism Graph (CPG), which holds symbolic information related to the data movement and parallelism inherent in the whole program, is the core of our approach. This data structure allows the estimation of the data movement and parallelism effects of any data distribution strategy supported by our model. Assuming that some program characteristics have been obtained by profiling and that some specific target machine features have been provided, the symbolic information included in the CPG can be replaced by constant values, expressed in seconds, representing data movement time overhead and time saved due to parallelization. The CPG is then used to model a minimal path problem, which is solved by a general-purpose linear 0-1 integer programming solver. Linear programming techniques guarantee that the solution provided is optimal, and they are highly efficient at solving this kind of problem.

The data mapping capabilities provided by the tool include alignment of the arrays; one- or two-dimensional distribution in a BLOCK or CYCLIC fashion; a set of remapping actions to be performed between phases, if profitable; plus the associated parallelization strategy. The effects of control flow statements between phases are taken into account in order to improve the accuracy of the model. The novelty of the approach resides in handling all stages of the data distribution problem, which traditionally have been treated in several independent phases, in a single step, and in providing an optimal solution according to our model.
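The "minimal path" structure of the remapping decision can be shown with a toy version of the problem: choose one distribution per phase (say, BLOCK or CYCLIC), where each phase has an execution cost under each distribution and changing distributions between phases incurs a redistribution cost. The thesis solves this with 0-1 integer programming over the CPG; the sketch below replaces that with plain dynamic programming over the same path structure, and all cost numbers are invented:

```python
# Toy shortest-path model of inter-phase data-distribution selection.
# exec_cost[p][d]: cost of running phase p with distribution d.
# remap_cost[(d1, d2)]: cost of redistributing from d1 to d2 between phases.
DISTS = ("BLOCK", "CYCLIC")
exec_cost = [{"BLOCK": 4, "CYCLIC": 9},
             {"BLOCK": 8, "CYCLIC": 2},
             {"BLOCK": 3, "CYCLIC": 3}]
remap_cost = {("BLOCK", "BLOCK"): 0, ("BLOCK", "CYCLIC"): 5,
              ("CYCLIC", "BLOCK"): 5, ("CYCLIC", "CYCLIC"): 0}

def best_layout(exec_cost, remap_cost):
    # dp[d] = (cheapest total cost of a path ending in distribution d, that path)
    dp = {d: (exec_cost[0][d], [d]) for d in DISTS}
    for phase in exec_cost[1:]:
        dp = {d: min(((c + remap_cost[(prev, d)] + phase[d], path + [d])
                      for prev, (c, path) in dp.items()), key=lambda t: t[0])
              for d in DISTS}
    return min(dp.values(), key=lambda t: t[0])

cost, layout = best_layout(exec_cost, remap_cost)
print(cost, layout)  # 14 ['BLOCK', 'CYCLIC', 'CYCLIC']
```

Here the mid-run remap from BLOCK to CYCLIC pays for itself: its cost of 5 is cheaper than running phase 2 under the wrong distribution. The real CPG formulation handles alignment, multi-dimensional distributions, and control flow between phases in the same unified optimisation, which is why it needs a general 0-1 solver rather than this simple recurrence.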