31 |
Comparison of Shared memory based parallel programming models
Ravela, Srikar Chowdary January 2010 (has links)
Parallel programming models are a challenging and emerging topic in the era of parallel computing. These models allow a developer to port a sequential application onto a platform with a larger number of processors so that the problem or application can be solved more easily. How an application is adapted in this manner is often influenced by the type of application, the type of platform, and many other factors. Several parallel programming models have been developed, and the two main variants are shared memory based and distributed memory based parallel programming models. The recognition of computing applications with immense computational requirements has exposed the difficulty of developing efficient programming models that bridge the gap between the hardware's ability to perform the computations and the software's ability to sustain that performance for those applications [25][9]. A better programming model is therefore needed, one that facilitates easy development while also delivering high performance. To address this challenge, this thesis confines itself to four different shared memory based parallel programming models and compares them by relating the development time of an application under each model to the performance achieved by that application in the same model. The models are evaluated using data parallel applications, in order to verify their ability to support data parallelism with respect to the development time of those applications. The data parallel applications are drawn from the Dense Matrix dwarfs; the dwarfs used are Matrix-Matrix multiplication, Jacobi Iteration and Laplace Heat Distribution. The experimental method consists of selecting three data parallel benchmarks and developing them under each of the four shared memory based parallel programming models considered for the evaluation. The performance of these applications under each programming model is recorded, and finally the results are used to compare the parallel programming models analytically. The results of the study show that, by sacrificing development time, better performance is achieved for the chosen data parallel applications when they are developed in Pthreads. On the other hand, by sacrificing a little performance, data parallel applications are extremely easy to develop in task based parallel programming models. The directive models are moderate from both perspectives and are rated in between the tasking models and the threading models. / From this study it is clear that the threading model, Pthreads, is the dominant programming model in terms of performance, supporting high speedups for two of the three dwarfs, while the tasking models are dominant in development time and in reducing the number of errors, supporting high growth in speedup for applications without any communication and lower growth in self-relative speedup for applications involving communication. The performance degradation of the tasking models for communication-based problems arises because task based models are designed to execute tasks in parallel without any interruptions or preemptions during their computations; introducing communication violates this assumption and thereby results in lower performance. The directive model OpenMP is moderate in both respects and stands between these models.
In general, the directive models and tasking models offer better speedup than the other models for task based problems that follow the divide and conquer strategy. For data parallelism, however, the speedup growth they achieve is low (i.e. they are less scalable for data parallel applications), although their execution times remain comparable to those of the threading models. Their development times for data parallel applications are also considerably lower, because of the ease of development supported by those models, which require fewer functional routines to parallelize the applications. This thesis is concerned with the comparison of shared memory based parallel programming models in terms of speedup. This kind of work acts as a guide that programmers can consult when developing applications under shared memory based parallel programming models. We suggest that this work can be extended in two different ways: one from the developer's perspective and the other as a cross-referential study of the parallel programming models. The former can be done by repeating a similar study with a different programmer and comparing that study with this one. The latter can be done by including multiple data points for the same programming model or by using a different set of parallel programming models for the study. / C/O K. Manoj Kumar; LGH 555; Lindbloms Vägan 97; 37233; Ronneby. Phone no: 0738743400 Home country phone no: +91 9948671552
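As an illustration of the development-time versus performance trade-off discussed above, the sketch below parallelizes the same Matrix-Matrix multiplication dwarf once with an OpenMP directive and once with Pthreads. It is not code from the thesis; the matrix size and thread count are arbitrary, and the task-based variants are omitted.

```c
/* Illustrative sketch (not from the thesis): the same matrix product
 * parallelized under the directive model and under the threading model. */
#include <pthread.h>

#define N 512
#define NTHREADS 4

static double A[N][N], B[N][N], C[N][N];

/* Directive model: one pragma parallelizes the outer loop. */
static void matmul_openmp(void)
{
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += A[i][k] * B[k][j];
            C[i][j] = sum;
        }
}

/* Threading model: the row range must be partitioned and threads
 * created and joined by hand. */
struct range { int lo, hi; };

static void *matmul_rows(void *arg)
{
    struct range *r = arg;
    for (int i = r->lo; i < r->hi; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += A[i][k] * B[k][j];
            C[i][j] = sum;
        }
    return NULL;
}

static void matmul_pthreads(void)
{
    pthread_t tid[NTHREADS];
    struct range r[NTHREADS];
    int chunk = N / NTHREADS;

    for (int t = 0; t < NTHREADS; t++) {
        r[t].lo = t * chunk;
        r[t].hi = (t == NTHREADS - 1) ? N : (t + 1) * chunk;
        pthread_create(&tid[t], NULL, matmul_rows, &r[t]);
    }
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);
}

int main(void)
{
    matmul_openmp();
    matmul_pthreads();
    return 0;
}
```

The directive version needs a single pragma, whereas the threading version must partition the iteration space and manage thread creation and joining by hand, which is the kind of extra development effort the study measures.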
|
32 |
Automatic code generation and optimization of multi-dimensional stencil computations on distributed-memory architectures / Génération automatique de code et optimisation de calculs stencils sur des architectures à mémoire distribuée
Saied, Mariem 25 September 2018 (has links)
In this work, we present Dido, an implicitly parallel domain-specific language (DSL) that captures high-level stencil abstractions and automatically generates high-performance parallel stencil code for distributed-memory architectures. The generated code uses ORWL as a communication and synchronization backend. We show that Dido achieves huge progress in terms of programmer productivity without sacrificing performance. Dido supports a wide range of stencil computations and real-world stencil-based applications. We show that the well-structured code generated by Dido lends itself to different possible optimizations, and we study the performance of two of them. We also combine Dido's code generation technique with the polyhedral loop optimizer Pluto to increase data locality and improve intra-node data reuse. We present experiments that prove the efficiency and scalability of the generated code, which outperforms both hand-crafted ORWL and MPI implementations.
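For readers unfamiliar with the application class, the following hand-written 5-point Jacobi-style sweep is the kind of kernel a stencil DSL such as Dido abstracts: the user would state only the update expression and halo width, while the generated code adds the distributed-memory decomposition and the ORWL-based halo exchange, both omitted here. Grid sizes and the number of time steps are arbitrary.

```c
/* Minimal single-node stencil sweep; the distributed decomposition and
 * communication that a DSL like Dido would generate are not shown. */
#include <stdlib.h>

#define NX 1024
#define NY 1024
#define STEPS 100

int main(void)
{
    double (*u)[NY] = malloc(sizeof(double[NX][NY]));
    double (*v)[NY] = malloc(sizeof(double[NX][NY]));
    if (!u || !v) return 1;

    for (int i = 0; i < NX; i++)
        for (int j = 0; j < NY; j++)
            u[i][j] = v[i][j] = 0.0;
    u[NX / 2][NY / 2] = 1.0;          /* arbitrary heat source */

    for (int t = 0; t < STEPS; t++) {
        for (int i = 1; i < NX - 1; i++)
            for (int j = 1; j < NY - 1; j++)
                v[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] +
                                  u[i][j - 1] + u[i][j + 1]);
        double (*tmp)[NY] = u; u = v; v = tmp;   /* swap grids */
    }

    free(u);
    free(v);
    return 0;
}
```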
|
33 |
Effiziente parallele Sortier- und Datenumverteilungsverfahren für Partikelsimulationen auf Parallelrechnern mit verteiltem Speicher / Efficient Parallel Sorting and Data Redistribution Methods for Particle Codes on Distributed Memory Systems
Hofmann, Michael 16 April 2012 (has links) (PDF)
Particle simulations represent a class of data- and compute-intensive simulation applications that are used in various areas of science and industrial research. The high computational cost of the solution methods employed and the large amounts of data required to model realistic problems make the use of parallel computing indispensable. Distributed-memory parallel computers are a widespread architecture for this purpose, in which a large number of compute nodes working in parallel exchange data with one another over an interconnection network. Computing the interactions between particles often constitutes the main effort of a particle simulation and is performed with fast solution methods such as the Barnes-Hut algorithm or the Fast Multipole Method. Efficient parallel implementations of these algorithms require the particles to be sorted according to their spatial positions. This sorting is necessary both to obtain efficient access to the particle data and as part of optimizations that increase the locality of memory accesses, minimize communication, and improve the load balancing of parallel computations.
This dissertation is concerned with the development of an efficient parallel sorting method and the communication operations it requires for data redistribution in particle simulations. To this end, a large number of existing parallel sorting methods for distributed memory are analyzed and compared against the requirements posed by particle simulation applications. Particular challenges arise with respect to the partitioning of the particle data over distributed memory, the weighting of the data to be sorted for improved load balancing, the handling of duplicate key values, and the availability and use of memory-efficient communication operations. To meet these requirements, a new parallel sorting method is developed and integrated into the application programs under consideration. In addition, a new in-place algorithm for the MPI_Alltoallv communication operation is presented, with which the memory consumption of the data redistribution required within the parallel sort can be reduced significantly. The behavior of all developed methods is investigated both in isolation and in practical use within different application programs, on a variety of parallel computers, including highly scalable ones.
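The data redistribution step mentioned above can be pictured with the following sketch of the standard, out-of-place MPI_Alltoallv exchange that follows a parallel sort of particles by spatial key; the dissertation's contribution is an in-place variant of exactly this step. The key-range partitioning, the particle count, and the use of a single double per particle are simplifications for illustration.

```c
/* Out-of-place particle redistribution after a sort by spatial key. */
#include <mpi.h>
#include <stdlib.h>

#define NLOCAL 1000

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    double *keys = malloc(NLOCAL * sizeof *keys);
    srand(rank + 1);
    for (int i = 0; i < NLOCAL; i++)
        keys[i] = (double)rand() / RAND_MAX;       /* surrogate particle data */

    /* Count how many local particles belong to each target process
       (here: equal-width key ranges). */
    int *scounts = calloc(nprocs, sizeof *scounts);
    int *rcounts = malloc(nprocs * sizeof *rcounts);
    int *sdispl  = malloc(nprocs * sizeof *sdispl);
    int *rdispl  = malloc(nprocs * sizeof *rdispl);
    for (int i = 0; i < NLOCAL; i++) {
        int target = (int)(keys[i] * nprocs);
        if (target == nprocs) target = nprocs - 1;
        scounts[target]++;
    }

    /* Exchange counts, then derive displacements on both sides. */
    MPI_Alltoall(scounts, 1, MPI_INT, rcounts, 1, MPI_INT, MPI_COMM_WORLD);
    sdispl[0] = rdispl[0] = 0;
    for (int p = 1; p < nprocs; p++) {
        sdispl[p] = sdispl[p - 1] + scounts[p - 1];
        rdispl[p] = rdispl[p - 1] + rcounts[p - 1];
    }
    int nrecv = rdispl[nprocs - 1] + rcounts[nprocs - 1];

    /* Pack particles by target process, then redistribute. */
    double *sendbuf = malloc(NLOCAL * sizeof *sendbuf);
    double *recvbuf = malloc(nrecv * sizeof *recvbuf);
    int *fill = calloc(nprocs, sizeof *fill);
    for (int i = 0; i < NLOCAL; i++) {
        int target = (int)(keys[i] * nprocs);
        if (target == nprocs) target = nprocs - 1;
        sendbuf[sdispl[target] + fill[target]++] = keys[i];
    }
    MPI_Alltoallv(sendbuf, scounts, sdispl, MPI_DOUBLE,
                  recvbuf, rcounts, rdispl, MPI_DOUBLE, MPI_COMM_WORLD);

    free(keys); free(scounts); free(rcounts); free(sdispl); free(rdispl);
    free(sendbuf); free(recvbuf); free(fill);
    MPI_Finalize();
    return 0;
}
```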
|
34 |
Realisierung einer Schedulingumgebung für gemischt-parallele Anwendungen und Optimierung von layer-basierten Schedulingalgorithmen / Development of a scheduling support environment for mixed parallel applications and optimization of layer-based scheduling algorithms
Kunis, Raphael 25 January 2011 (links) (PDF)
One challenge of parallel processing is achieving scalability of large parallel applications across different parallel systems. The central problem is that an application may run very well on one parallel system, while porting it to another system usually leads to poor results.
By using the programming model of parallel tasks with dependencies, the scalability of many parallel algorithms can be improved considerably. Programming with parallel tasks leads to task graphs with dependencies that represent a parallel application, which is also referred to as a mixed-parallel application. The basis for the efficient execution of a mixed-parallel application is a suitable schedule, which prescribes an efficient mapping of the parallel tasks onto the processors of the parallel system. Scheduling algorithms are used to compute such a schedule.
A central problem in determining a schedule for mixed-parallel applications is that scheduling is already NP-hard for single-processor tasks with dependencies on a parallel system with only two processors. Therefore, only approximation algorithms and heuristics exist to compute a schedule. One way to compute a schedule is to use layer-based scheduling algorithms. These algorithms first form layers of independent parallel tasks and then compute the schedule for each layer separately.
One weakness of these scheduling algorithms is the assembly of the individual layer schedules into the global schedule. The Move-blocks algorithm presented here offers an elegant way to improve this assembly by merging the schedules of consecutive layers.
Although a large number of scheduling algorithms for mixed-parallel applications exist, there is so far no comprehensive support for scheduling in programming tools. In particular, there is no scheduling environment that unites a variety of scheduling algorithms. The presentation of the flexible, component-based and extensible scheduling environment SEParAT is the second focus of this dissertation. SEParAT supports various usage scenarios that go far beyond pure scheduling, e.g. the comparison of scheduling algorithms and the extension and realization of new scheduling algorithms. In addition to presenting the usage scenarios, both the internal processing of a scheduling pass and the component-based software architecture are described in detail.
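The layer formation step described above can be sketched as the repeated extraction of tasks whose predecessors have all completed, as in the following illustrative C program; the task graph is made up, and scheduling each layer onto processors as well as the Move-blocks merging of consecutive layer schedules are not shown.

```c
/* Group the tasks of a dependency graph into layers of independent tasks. */
#include <stdio.h>

#define NTASKS 6

/* dep[i][j] != 0 means task j depends on task i.
   Edges: 0->2, 0->3, 1->3, 2->4, 3->4, 3->5 */
static const int dep[NTASKS][NTASKS] = {
    {0,0,1,1,0,0},
    {0,0,0,1,0,0},
    {0,0,0,0,1,0},
    {0,0,0,0,1,1},
    {0,0,0,0,0,0},
    {0,0,0,0,0,0},
};

int main(void)
{
    int indeg[NTASKS] = {0}, done[NTASKS] = {0}, remaining = NTASKS;

    for (int i = 0; i < NTASKS; i++)
        for (int j = 0; j < NTASKS; j++)
            if (dep[i][j]) indeg[j]++;

    for (int layer = 0; remaining > 0; layer++) {
        printf("layer %d:", layer);
        int members[NTASKS], count = 0;
        for (int t = 0; t < NTASKS; t++)
            if (!done[t] && indeg[t] == 0)
                members[count++] = t;       /* independent parallel tasks */
        for (int m = 0; m < count; m++) {
            int t = members[m];
            printf(" T%d", t);
            done[t] = 1;
            remaining--;
            for (int j = 0; j < NTASKS; j++)
                if (dep[t][j]) indeg[j]--;  /* release successors */
        }
        printf("\n");
    }
    return 0;
}
```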
|
35 |
Seismic modeling and imaging with Fourier method : numerical analyses and parallel implementation strategies
Chu, Chunlei, 1977- 13 June 2011
Our knowledge of elastic wave propagation in general heterogeneous media with complex geological structures comes principally from numerical simulations. In this dissertation, I demonstrate through rigorous theoretical analyses and comprehensive numerical experiments that the Fourier method is a suitable method of choice for large scale 3D seismic modeling and imaging problems, due to its high accuracy and computational efficiency. The most attractive feature of the Fourier method is its ability to produce highly accurate solutions on relatively coarser grids, compared with other numerical methods for solving wave equations. To further advance the Fourier method, I identify two aspects of the method to focus on in this work, i.e., its implementation on modern clusters of computers and efficient high-order time stepping schemes. I propose two new parallel algorithms to improve the efficiency of the Fourier method on distributed memory systems using MPI. The first algorithm employs non-blocking all-to-all communications to optimize the conventional parallel Fourier modeling workflows by overlapping communication with computation. With a carefully designed communication-computation overlapping mechanism, a large amount of communication overhead can be concealed when implementing different kinds of wave equations. The second algorithm combines the advantages of both the Fourier method and the finite difference method by using convolutional high-order finite difference operators to evaluate the spatial derivatives in the decomposed direction. The high-order convolutional finite difference method guarantees a satisfactory accuracy and provides the flexibility of using non-blocking point-to-point communications for efficient interprocessor data exchange and the possibility of overlapping communication and computation. As a result, this hybrid method achieves an optimized balance between numerical accuracy and computational efficiency. To improve the overall accuracy of time domain Fourier simulations, I propose a family of new high-order time stepping schemes, based on a novel algorithm for designing time integration operators, to reduce temporal derivative discretization errors in a cost-effective fashion. I explore the pseudo-analytical method and propose high-order formulations to further improve its accuracy and ability to deal with spatial heterogeneities. I also extend the pseudo-analytical method to solve the variable-density acoustic and elastic wave equations. I thoroughly examine the finite difference method by conducting complete numerical dispersion and stability analyses. I comprehensively compare the finite difference method with the Fourier method and provide a series of detailed benchmarking tests of these two methods under a number of different simulation configurations. The Fourier method outperforms the finite difference method, in terms of both accuracy and efficiency, for both the theoretical studies and the numerical experiments, which provides solid evidence that the Fourier method is a superior scheme for large scale seismic modeling and imaging problems.
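The communication-computation overlap described for the first parallel algorithm can be sketched as follows, assuming the MPI-3 non-blocking collective MPI_Ialltoall; the block size and the placeholder local work are illustrative and not the dissertation's actual kernels.

```c
/* Start a transpose-like global exchange, hide it behind independent local
 * work, and wait only when the exchanged data is actually needed. */
#include <mpi.h>
#include <stdlib.h>

#define BLOCK 4096

static void local_work(double *x, int n)        /* stand-in computation */
{
    for (int i = 0; i < n; i++)
        x[i] = x[i] * 0.5 + 1.0;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int nprocs;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    double *send  = calloc((size_t)BLOCK * nprocs, sizeof *send);
    double *recv  = calloc((size_t)BLOCK * nprocs, sizeof *recv);
    double *other = calloc(BLOCK, sizeof *other);

    MPI_Request req;
    /* Start the all-to-all exchange without blocking ... */
    MPI_Ialltoall(send, BLOCK, MPI_DOUBLE, recv, BLOCK, MPI_DOUBLE,
                  MPI_COMM_WORLD, &req);
    /* ... and overlap it with computation that does not depend on it. */
    local_work(other, BLOCK);
    /* Block only when the received data is required. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    local_work(recv, BLOCK * nprocs);

    free(send); free(recv); free(other);
    MPI_Finalize();
    return 0;
}
```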
|
36 |
Optimization of memory management on distributed machine
Ha, Viet Hai 05 October 2012 (links) (PDF)
In order to further explore the capabilities of parallel computing architectures such as grids, clusters, multi-processors and, more recently, clouds and multi-cores, an easy-to-use parallel language remains an important and challenging issue. From the programmer's point of view, OpenMP is very easy to use, with its support for incremental parallelization and its features for dynamically setting the number of threads and the scheduling strategy. However, as it was initially designed for shared memory systems, OpenMP is usually limited on distributed memory systems to intra-node computations. Many attempts have been made to port OpenMP onto distributed systems. The most prominent approaches mainly focus on exploiting the capabilities of a particular network architecture and therefore cannot provide an open solution. Others are based on an already available software layer such as DMS, MPI or Global Array and, as a consequence, have difficulty becoming a fully compliant and high-performance implementation of OpenMP. As yet another attempt to build an OpenMP-compliant implementation for distributed memory systems, CAPE − which stands for Checkpointing Aided Parallel Execution − has been developed around the following idea: when reaching a parallel section, the master thread is dumped and its image is sent to the slaves; then, each slave executes a different thread; at the end of the parallel section, the slave threads extract and return to the master thread the list of all modifications that have been performed locally; the master incorporates these modifications and resumes its execution. In order to prove the feasibility of this paradigm, the first version of CAPE was implemented using complete checkpoints. However, preliminary analysis showed that the large amount of data transferred between threads and the extraction of the list of modifications from complete checkpoints lead to weak performance. Furthermore, this version was restricted to parallel problems satisfying Bernstein's conditions, i.e. it did not address the requirements of shared data. This thesis presents the approaches we propose to improve CAPE's performance and to overcome the restrictions on shared data. First, we developed DICKPT, which stands for Discontinuous Incremental Checkpointing, an incremental checkpointing technique that supports saving incremental checkpoints discontinuously during the execution of a process. Based on DICKPT, the execution speed of the new version of CAPE increased significantly. For example, the time to compute a large matrix-matrix product on a desktop cluster became very similar to the execution time of the same optimized MPI program. Moreover, the speedup of this new version for various numbers of threads is quite linear for different problem sizes. On the shared-data side, we propose UHLRC, which stands for Updated Home-based Lazy Release Consistency, a modified version of the Home-based Lazy Release Consistency (HLRC) memory model, to make it more appropriate to the characteristics of CAPE. Prototypes and algorithms to implement the synchronization and the OpenMP data-sharing clauses and directives are also specified. These two contributions ensure that CAPE respects shared-data behavior.
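The extraction of the "list of modifications" that slaves return to the master can be illustrated conceptually as a diff against a snapshot taken when the parallel section is entered, as in the sketch below; this is only an illustration of the principle, not CAPE's or DICKPT's actual checkpointing code, and the data size and iteration range are invented.

```c
/* Conceptual sketch: record only the memory words a slave modified,
 * by comparing the current data against a snapshot of the shared data. */
#include <stdio.h>
#include <string.h>

struct mod { size_t index; double value; };   /* one recorded modification */

static size_t extract_mods(const double *snapshot, const double *current,
                           size_t n, struct mod *out)
{
    size_t count = 0;
    for (size_t i = 0; i < n; i++)
        if (snapshot[i] != current[i]) {
            out[count].index = i;
            out[count].value = current[i];
            count++;
        }
    return count;
}

int main(void)
{
    enum { N = 1024 };
    double data[N] = {0};

    /* Snapshot the shared data on entry to the parallel section. */
    double snapshot[N];
    memcpy(snapshot, data, sizeof data);

    /* One slave executes its chunk of the section (iterations 256..511
       of a made-up loop). */
    for (size_t i = 256; i < 512; i++)
        data[i] = (double)i * 2.0;

    /* Extract the diff that would be sent back to the master. */
    struct mod mods[N];
    size_t nmods = extract_mods(snapshot, data, N, mods);
    printf("%zu modified words, first at index %zu\n",
           nmods, nmods ? mods[0].index : (size_t)0);
    return 0;
}
```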
|
37 |
Who is the cowboy in Washington?: beating google at their own game with neuroscience and cryptography
Kogeyama, Renato 17 December 2014 (links)
Who was the cowboy in Washington? What is the land of sushi? Most people would have answers to these questions readily available, yet modern search engines, arguably the epitome of technology in finding answers to most questions, are completely unable to do so. It seems that people capture few information items and rapidly converge to a seemingly 'obvious' solution. We will study approaches to this problem, with two additional hard demands that constrain the space of possible theories: the sought model must be both psychologically and neuroscientifically plausible. Building on top of the mathematical model of memory called Sparse Distributed Memory, we will see how some well-known methods in cryptography can point toward a promising, comprehensive solution that preserves four crucial properties of human psychology.
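For readers unfamiliar with the memory model named above, the following toy sketch shows the core mechanics of a Sparse Distributed Memory: hard locations are activated by Hamming distance to the query address, writes accumulate into their counters, and reads sum and threshold those counters. The dimensions, location count and activation radius are arbitrary and unrelated to the thesis's configuration.

```c
/* Toy Sparse Distributed Memory with 64-bit addresses and data words. */
#include <stdio.h>
#include <stdlib.h>

#define DIM 64          /* address/word length in bits */
#define NLOC 256        /* number of hard locations */
#define RADIUS 28       /* Hamming activation radius */

typedef unsigned long long word_t;

static word_t locations[NLOC];
static int counters[NLOC][DIM];

static int hamming(word_t a, word_t b)
{
    return __builtin_popcountll(a ^ b);   /* GCC/Clang builtin */
}

static void sdm_write(word_t addr, word_t data)
{
    for (int l = 0; l < NLOC; l++)
        if (hamming(locations[l], addr) <= RADIUS)
            for (int b = 0; b < DIM; b++)
                counters[l][b] += ((data >> b) & 1) ? 1 : -1;
}

static word_t sdm_read(word_t addr)
{
    int sums[DIM] = {0};
    word_t out = 0;
    for (int l = 0; l < NLOC; l++)
        if (hamming(locations[l], addr) <= RADIUS)
            for (int b = 0; b < DIM; b++)
                sums[b] += counters[l][b];
    for (int b = 0; b < DIM; b++)
        if (sums[b] > 0)
            out |= 1ULL << b;
    return out;
}

int main(void)
{
    srand(42);
    for (int l = 0; l < NLOC; l++)
        locations[l] = ((word_t)rand() << 32) ^ (word_t)rand();

    word_t addr = 0x0123456789abcdefULL, data = 0xfedcba9876543210ULL;
    sdm_write(addr, data);
    printf("recalled %llx (stored %llx)\n", sdm_read(addr), data);
    return 0;
}
```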
|
38 |
Um agente jogador de GO com busca em árvore Monte-Carlo aprimorada por memória esparsamente distribuída / A Go-playing agent with Monte-Carlo tree search enhanced by sparse distributed memory
Aguiar, Matheus Araújo 04 November 2013
The game of Go is very ancient, with more than 4000 years of history, and it is still popular nowadays, representing a big challenge for Artificial Intelligence. Despite its simple rules, the techniques that obtained success in other games like chess and draughts cannot handle satisfactorily the complex patterns and behaviours that emerge during a match of Go. The present work implements SDM-Go, a competitive agent for Go that seeks to reduce the use of supervision in the search for the best move. SDM-Go utilizes the sparse distributed memory model as an additional resource to the Monte-Carlo tree search, which is used by many of the best automatic Go players nowadays. Based upon the open-source player Fuego, the use of the sparse distributed memory by SDM-Go is intended as an alternative to the strongly supervised process used by Fuego. The Monte-Carlo tree search executed by the agent Fuego uses a set of heuristics codified by human professionals to guide the simulations and also to evaluate new nodes found in the tree. In a different way, SDM-Go implements a non-supervised and domain-independent approach, in which the history of the values of board states previously visited during the search is used to evaluate new boards (nodes of the search tree). In this way, SDM-Go reduces the supervision of Fuego, substituting its heuristics by the sparse distributed memory, which works as a repository for the information from the history of visited board states. Thus, the contributions of SDM-Go consist of: (1) the utilization of a sparse distributed memory to substitute the supervised approach of Fuego for evaluating new nodes found in the search tree; (2) the implementation of a board state representation based on bit vectors, so as not to compromise the performance of the system due to the boards stored in the memory; (3) the extension of the usage of the Monte-Carlo simulation results to update the values of the board states stored in the memory. Distinctly from many other existing agents, the use of the sparse distributed memory represents a domain-independent approach. The results obtained in tournaments against the well-known open-source agent Fuego show that SDM-Go can successfully provide a non-supervised and domain-independent approach to evaluate new nodes found in the search tree. Despite the longer runtime required by the use of the sparse distributed memory, which is central to the agent's performance, SDM-Go is able to keep a competitive level of play, especially on the 9X9 board. / Master's degree in Computer Science
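Contribution (2) above, the bit-vector board representation, can be pictured with the following sketch: each colour's stones on a 9x9 board are packed into machine words, so comparing a new search node against board states stored in the sparse distributed memory reduces to cheap XOR/popcount operations. SDM-Go's actual encoding and memory layout may differ; this only illustrates the idea.

```c
/* 9x9 Go board as bit vectors, with a Hamming-style board distance. */
#include <stdio.h>

#define SIZE 9
#define POINTS (SIZE * SIZE)
#define WORDS ((POINTS + 63) / 64)

typedef struct {
    unsigned long long black[WORDS];
    unsigned long long white[WORDS];
} board_t;

static void place(board_t *b, int row, int col, int is_black)
{
    int p = row * SIZE + col;
    unsigned long long *side = is_black ? b->black : b->white;
    side[p / 64] |= 1ULL << (p % 64);
}

/* Number of points at which two board states differ (both colours). */
static int board_distance(const board_t *a, const board_t *b)
{
    int d = 0;
    for (int w = 0; w < WORDS; w++) {
        d += __builtin_popcountll(a->black[w] ^ b->black[w]);
        d += __builtin_popcountll(a->white[w] ^ b->white[w]);
    }
    return d;
}

int main(void)
{
    board_t a = {{0}, {0}}, b = {{0}, {0}};
    place(&a, 2, 2, 1);  place(&a, 6, 6, 0);
    place(&b, 2, 2, 1);  place(&b, 4, 4, 0);
    printf("distance = %d\n", board_distance(&a, &b));   /* prints 2 */
    return 0;
}
```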
|
39 |
Implementação paralela em um ambiente de múltiplas GPUs de um modelo 3D do sistema imune inato / Parallel implementation of a 3D model of the innate immune system in a multi-GPU environment
Xavier, Micael Peters 26 August 2013
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / The development of computer systems that simulate the behavior of tissues or even whole organs is an extremely complex task. One of the many obstacles related to the development of such systems is the huge amount of computational resources needed to execute the simulations. For this reason, the use of strategies and methods that employ parallel computing is essential. This work focuses on the spatial-temporal simulation of some human innate immune system (HIS) cells and molecules in a three-dimensional section of tissue. Aiming to reduce the time required to perform the simulation, multiple graphics processing units (GPUs) were used in a cluster environment. Despite the high communication cost imposed by the use of multiple GPUs, the approaches and techniques used in this work to implement the parallel versions of the simulator proved to be very effective in their purpose of reducing the simulation time.
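The inter-GPU/inter-node communication cost mentioned above typically comes from halo exchanges when the 3D tissue section is partitioned into slabs; the sketch below shows that host-side pattern with MPI, with grid sizes as placeholders and the GPU update represented by a stub. It is an assumed decomposition for illustration, not the simulator's actual implementation.

```c
/* 1D slab decomposition of a 3D volume with per-step halo exchange. */
#include <mpi.h>
#include <stdlib.h>

#define NX 64
#define NY 64
#define NZ_LOCAL 32                 /* slab depth owned by each process */
#define PLANE (NX * NY)

static void update_slab(double *slab, int nz) { (void)slab; (void)nz; } /* GPU kernel stub */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Local slab plus one ghost plane below and one above. */
    double *slab = calloc((size_t)PLANE * (NZ_LOCAL + 2), sizeof *slab);
    int below = (rank > 0) ? rank - 1 : MPI_PROC_NULL;
    int above = (rank < nprocs - 1) ? rank + 1 : MPI_PROC_NULL;

    for (int step = 0; step < 10; step++) {
        /* Send lowest owned plane down, receive ghost plane from above. */
        MPI_Sendrecv(slab + PLANE, PLANE, MPI_DOUBLE, below, 0,
                     slab + (size_t)PLANE * (NZ_LOCAL + 1), PLANE, MPI_DOUBLE,
                     above, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* Send highest owned plane up, receive ghost plane from below. */
        MPI_Sendrecv(slab + (size_t)PLANE * NZ_LOCAL, PLANE, MPI_DOUBLE, above, 1,
                     slab, PLANE, MPI_DOUBLE, below, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        update_slab(slab, NZ_LOCAL + 2);
    }

    free(slab);
    MPI_Finalize();
    return 0;
}
```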
|
40 |
Effective Automatic Computation Placement and Data Allocation for Parallelization of Regular Programs
Chandan, G January 2014 (links) (PDF)
Scientific applications that operate on large data sets require huge amounts of computation power and memory. These applications are typically run on High Performance Computing (HPC) systems that consist of multiple compute nodes, connected over a network interconnect such as InfiniBand. Each compute node has its own memory and does not share the address space with other nodes. A significant amount of work has been done in the past two decades on parallelizing for distributed-memory architectures. A majority of this work was done in developing compiler technologies such as High Performance Fortran (HPF) and partitioned global address space (PGAS). However, several steps involved in achieving good performance remained manual. Hence, the approach currently used to obtain the best performance is to rely on highly tuned libraries such as ScaLAPACK. The objective of this work is to improve automatic compiler and runtime support for distributed-memory clusters for regular programs. Regular programs typically use arrays as their main data structure, and array accesses are affine functions of outer loop indices and program parameters. Many scientific applications such as linear-algebra kernels, stencils, partial differential equation solvers, data-mining applications and dynamic programming codes fall into this category.
In this work, we propose techniques for finding computation mappings and data allocations when compiling regular programs for distributed-memory clusters. Techniques for transformation and detection of parallelism relying on the polyhedral framework already exist. We propose automatic techniques to determine computation placements for the identified parallelism and the allocation of data. We model the problem of finding a good computation placement as a graph partitioning problem with constraints to minimize both communication volume and load imbalance for the entire program. We show that our approach for computation mapping is more effective than those that can be developed using vendor-supplied libraries. Our approach for data allocation is driven by tiling of data spaces along with a compiler-assisted runtime scheme to allocate and deallocate tiles on demand and reuse them. Experimental results on some sequences of BLAS calls demonstrate a mean speedup of 1.82× over versions written with ScaLAPACK. Besides enabling weak scaling for distributed memory, data tiling also improves locality for shared-memory parallelization. Experimental results on a 32-core shared-memory SMP system show a mean speedup of 2.67× over code that is not data tiled.
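The on-demand tile allocation and reuse scheme mentioned above can be pictured with the following sketch, in which a tile is allocated the first time a computation mapped to this node touches it and is reused on later accesses; the tile size, lookup structure and function names are illustrative, not the thesis's actual runtime interface.

```c
/* Allocate tiles of the data space lazily, reuse them, free them when done. */
#include <stdio.h>
#include <stdlib.h>

#define TILE 256                      /* elements per tile */
#define NTILES 64                     /* tiles in the (1D) data space */

static double *tiles[NTILES];         /* NULL until first touched */

/* Return the buffer for tile t, allocating it on demand. */
static double *tile_get(int t)
{
    if (!tiles[t]) {
        tiles[t] = calloc(TILE, sizeof(double));
        printf("allocated tile %d\n", t);
    }
    return tiles[t];
}

static void tile_release(int t)       /* called after the tile's last use */
{
    free(tiles[t]);
    tiles[t] = NULL;
}

int main(void)
{
    /* A computation placed on this node touches only tiles 3 and 4,
       so only those are ever allocated; the second pass reuses them. */
    for (int rep = 0; rep < 2; rep++)
        for (int t = 3; t <= 4; t++) {
            double *buf = tile_get(t);
            for (int i = 0; i < TILE; i++)
                buf[i] += 1.0;
        }

    tile_release(3);
    tile_release(4);
    return 0;
}
```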
|