1 |
Linking Scheme code to data-parallel CUDA-C code. 2013 December 1900 (has links)
In the Compute Unified Device Architecture (CUDA), programmers must manage the memory operations, synchronization, and utility functions of Central Processing Unit (CPU) programs that control and issue data-parallel general-purpose programs running on a Graphics Processing Unit (GPU). NVIDIA Corporation developed the CUDA framework to enable the development of data-parallel GPU programs that accelerate scientific and engineering applications, providing a language extension of C called CUDA-C. A foreign-function interface composed of Scheme and CUDA-C constructs extends the Gambit Scheme compiler and enables linking of Scheme and data-parallel CUDA-C code, supporting high-performance parallel computation with reasonably low runtime overhead. We provide six test cases, implemented in both Scheme and CUDA-C, to evaluate the performance of our implementation in Gambit; they show 0-35% overhead in the usual case. Our work enables Scheme programmers to develop expressive programs that control and issue data-parallel programs running on GPUs, while also reducing hands-on memory management.
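The host-side bookkeeping described above (allocate device memory, copy inputs in, launch a data-parallel kernel, copy results back) is what such a foreign-function interface automates. The sketch below, which is not the thesis's actual FFI, illustrates that lifecycle in Python, with plain lists standing in for device buffers; all function names here are illustrative stand-ins for the corresponding CUDA runtime calls.

```python
def device_alloc(n):
    """Stand-in for cudaMalloc: reserve a 'device' buffer of n elements."""
    return [0] * n

def memcpy(dst, src):
    """Stand-in for cudaMemcpy: copy src into dst element by element."""
    for i, v in enumerate(src):
        dst[i] = v

def launch_kernel(kernel, grid, block, *buffers):
    """Stand-in for a kernel launch: run `kernel` once per logical thread."""
    for tid in range(grid * block):
        kernel(tid, *buffers)

def saxpy_kernel(tid, a, x, y, out):
    # Each logical thread handles one element, as in CUDA's SIMT model.
    if tid < len(x):
        out[tid] = a[0] * x[tid] + y[tid]

def saxpy(a, xs, ys):
    """The host-side wrapper a Scheme FFI call would expand into."""
    d_a, d_x = device_alloc(1), device_alloc(len(xs))
    d_y, d_out = device_alloc(len(ys)), device_alloc(len(xs))
    memcpy(d_a, [a]); memcpy(d_x, xs); memcpy(d_y, ys)        # host -> device
    launch_kernel(saxpy_kernel, 1, len(xs), d_a, d_x, d_y, d_out)
    result = [0] * len(xs)
    memcpy(result, d_out)                                     # device -> host
    return result
```

Every call in this round trip is boilerplate the Scheme programmer would otherwise write by hand, which is exactly the "hands-on memory management" the abstract says the interface reduces.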
|
2 |
An interprocedural framework for data redistributions in distributed memory machines. Krishnamurthy, Sudha. January 1996 (has links)
No description available.
|
3 |
Evaluations of the parallel extensions in .NET 4.0. Islam, Md. Rashedul; Islam, Md. Rofiqul; Mazumder, Tahidul Arafhin. January 2011 (has links)
Parallel programming, or building parallel applications, is a challenging area of computing research. Its main goal is to improve the performance of computer applications: a well-structured parallel application can achieve better execution speed than sequential execution on existing and upcoming parallel computer architectures. This thesis, "Evaluations of the parallel extensions in .NET 4.0", describes an experimental evaluation of the performance of parallel applications built with the thread-safe data structures and parallel constructs in .NET Framework 4.0. The performance issues it describes help in building efficient parallel applications. Before the experimental evaluation, the thesis covers background relevant to parallel programming, such as parallel computer architectures, memory architectures, parallel programming models, decomposition, and threading. It describes the different APIs in .NET Framework 4.0 and coding practices for building efficient parallel applications in different situations. It also presents implementations of different parallel constructs and APIs, such as static multithreading, the ThreadPool, Task, Parallel.For, Parallel.ForEach, and PLINQ. The parallel applications were evaluated through experimental results and performance measurements. In most cases, the results show that parallelism outperforms traditional sequential execution, with lower execution times and higher CPU utilization. However, parallel loops do not perform better under improper partitioning, oversubscription, or improper workloads. The discussion of proper partitioning, oversubscription, and workload balancing helps in building more efficient parallel applications. / Program: Magisterutbildning i informatik
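The partitioning issue the thesis highlights can be sketched concretely: a Parallel.For-style reduction splits the input into contiguous chunks, one per worker, and its benefit disappears if chunks are unbalanced or there are far more workers than cores (oversubscription). The sketch below is illustrative, not from the thesis, and uses Python threads in place of .NET tasks.

```python
from concurrent.futures import ThreadPoolExecutor

def chunked(data, n_chunks):
    """Split data into n_chunks contiguous slices of near-equal size."""
    size, rem = divmod(len(data), n_chunks)
    out, start = [], 0
    for i in range(n_chunks):
        end = start + size + (1 if i < rem else 0)  # spread the remainder evenly
        out.append(data[start:end])
        start = end
    return out

def parallel_sum(data, n_workers):
    """Parallel.For-style reduction: each worker sums one partition, then
    the partial sums are combined. n_workers far above the core count
    models the oversubscription case the thesis warns about."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(sum, chunked(data, n_workers)))
    return sum(partials)
```

With even chunks the work per worker is balanced; handing one worker most of the data, or spawning dozens of workers per core, reproduces the slowdowns the evaluation observed.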
|
4 |
Um método para paralelização automática de workflows intensivos em dados / A method for automatic parallelization of data-intensive workflows. Watanabe, Elaine Naomi. 22 May 2017 (has links)
The analysis of large-scale datasets is one of the major current computational challenges, present not only in modern science but also in industry and the public sector. In these scenarios, data processing is usually modeled as a set of activities interconnected through data flows, known as workflows. Due to their high computational cost, several strategies have been proposed to improve the efficiency of data-intensive workflows, such as clustering activities to minimize data transfers and parallelizing data processing to reduce makespan, so that two or more activities are performed at the same time on different computational resources. The parallelism, in this case, is defined by the structure of the workflow's activity-composition model. In general, Workflow Management Systems are responsible for the coordination and execution of these activities in a distributed environment. However, they are not aware of the type of processing each activity will perform, so they cannot automatically exploit strategies for parallel execution. Parallelizable activities are defined by the user at workflow design time, and creating a structure that makes efficient use of a distributed environment is not a trivial task.
This work aims to provide more efficient executions of data-intensive workflows and, to that end, proposes a method for the automatic parallelization of these applications, aimed at users who are not specialists in high-performance computing. The method defines nine semantic annotations that characterize how data is accessed and consumed by activities and, taking the available computational resources into account, automatically creates strategies that exploit data parallelism. The proposed method generates replicas of the annotated activities and defines a workflow data indexing and distribution scheme that allows greater parallel access. Its efficiency was evaluated on two workflow models with real data, executed on the Amazon cloud platform. A relational DBMS (PostgreSQL) and a NoSQL DBMS (MongoDB) were used to manage up to 20.5 million data objects in 21 scenarios with different data partitioning and replication settings. The experiments showed that the parallelization of activity execution promoted by the method reduced the workflow's makespan by up to 66.6% without increasing its monetary cost.
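The abstract does not enumerate the nine annotations, so the annotation names below are hypothetical; the sketch only illustrates the general idea described above: an activity whose annotation says it consumes data independently per item can be replicated, one replica per data partition, while other activities must run as a single instance.

```python
from concurrent.futures import ThreadPoolExecutor

def plan_execution(activity, data, annotation, n_resources):
    """Replicate an activity across data partitions when its (hypothetical)
    annotation allows; otherwise run a single instance over all the data."""
    if annotation == "independent-per-item":     # hypothetical annotation name
        # One replica per resource, each fed a stride of the dataset.
        parts = [data[i::n_resources] for i in range(n_resources)]
        with ThreadPoolExecutor(n_resources) as pool:
            results = list(pool.map(lambda p: [activity(x) for x in p], parts))
        return [y for part in results for y in part]
    # e.g. "whole-dataset": the activity needs all data at once -> no replicas.
    return [activity(x) for x in data]
```

Deciding this replication automatically, from the annotation plus the available resources, is the job the method takes over from the user at design time.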
|
5 |
[en] SUPPORT INTEGRATION OF DYNAMIC WORKLOAD GENERATION TO SAMBA FRAMEWORK / [pt] INTEGRAÇÃO DE SUPORTE PARA GERAÇÃO DE CARGA DINÂMICA AO AMBIENTE DE DESENVOLVIMENTO SAMBA. SERGIO MATEO BADIOLA. 25 October 2005 (has links)
Alexandre Plastino's doctoral thesis presents a framework for the development of SPMD (Single Program, Multiple Data) parallel applications, named SAMBA, that enables the generation of different versions of a parallel application by incorporating different load balancing algorithms from an internal library. This dissertation presents a dynamic workload generation tool, integrated with SAMBA, that makes it possible to create, at execution time, different external workload profiles to be applied to a parallel application under study. The objective is to enable a parallel application developer to select the most appropriate load balancing algorithm based on its performance under varying external workload conditions. In order to validate the integration of the tool with SAMBA, execution results were obtained for two distinct SPMD applications.
|
6 |
Implementation of Data Parallel Primitives on MIMD Shared Memory Systems. Mortensen, Christian. January 2019 (has links)
This thesis presents an implementation of a multi-threaded C library for performing data-parallel computations on MIMD shared memory systems, with support for user-defined operators and one-dimensional sparse arrays. Multi-threaded parallel execution was achieved with POSIX threads, and the library exposes several functions for performing data-parallel computations directly on arrays. The implemented functions were based on a set of primitives that many data-parallel programming languages have in common. The scalability of the individual primitives varied greatly: most of them only gained a significant speedup when executed on two cores, followed by a significant drop-off in speedup as more cores were added. The reduction primitive was an exception, achieving near-optimal speedup in most tests. The library proved unviable for expressing algorithms requiring more than one or two primitives in sequence, due to the overhead that each of them causes.
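Two of the common primitives the abstract refers to, map and reduce, can be sketched on a shared-memory thread pool as follows. This is an illustrative Python sketch, not the thesis's C library: each thread processes one contiguous chunk, and for reduce the per-thread partial results are then folded together, which is why reduction combines cheaply and scales well.

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce as seq_reduce

N_THREADS = 4

def _chunks(xs, n):
    """Yield n contiguous, near-equal slices of xs (some may be empty)."""
    k, r = divmod(len(xs), n)
    start = 0
    for i in range(n):
        end = start + k + (1 if i < r else 0)
        yield xs[start:end]
        start = end

def par_map(f, xs):
    """Apply a user-defined operator elementwise, one chunk per thread."""
    with ThreadPoolExecutor(N_THREADS) as pool:
        parts = list(pool.map(lambda c: [f(x) for x in c], _chunks(xs, N_THREADS)))
    return [y for part in parts for y in part]

def par_reduce(op, xs, identity):
    """Reduce with an associative user-defined operator: each thread folds
    its chunk, then the few partial results are folded sequentially."""
    with ThreadPoolExecutor(N_THREADS) as pool:
        parts = list(pool.map(lambda c: seq_reduce(op, c, identity),
                              _chunks(xs, N_THREADS)))
    return seq_reduce(op, parts, identity)
```

Chaining several such primitives pays the thread-coordination overhead once per primitive, which matches the thesis's observation that sequences of more than one or two primitives become unviable.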
|
7 |
Directive-based General-purpose GPU Programming. Han, Tian Yi David. 19 January 2010 (has links)
Graphics Processing Units (GPUs) have become a competitive accelerator for non-graphics applications, mainly driven by improvements in GPU programmability. Although the Compute Unified Device Architecture (CUDA) is a simple C-like interface for programming NVIDIA GPUs, porting applications to CUDA remains a challenge for average programmers. In particular, CUDA places on the programmer the burden of packaging GPU code in separate functions, of explicitly managing data transfer between the host and GPU memories, and of manually optimizing the utilization of GPU memory. We have designed hiCUDA, a high-level directive-based language for CUDA programming. It allows programmers to perform these tedious tasks in a simpler manner, directly on the sequential code. We have also prototyped a compiler that translates a hiCUDA program to a CUDA program and can handle real-world applications. Experiments using seven standard CUDA benchmarks show that the simplicity hiCUDA provides comes at no cost to performance.
|
10 |
A Skeleton Programming Library for Multicore CPU and Multi-GPU Systems. Enmyren, Johan. January 2010 (has links)
This report presents SkePU, a C++ template library which provides a simple and unified interface for specifying data-parallel computations, with the help of skeletons, on GPUs using CUDA and OpenCL. The interface is general enough to support other architectures as well, and SkePU implements both a sequential CPU and a parallel OpenMP back end. It also supports multi-GPU systems. Benchmarks show that copying data between the host and the GPU is often a bottleneck, so a container that uses lazy memory copying has been implemented to avoid unnecessary memory transfers. SkePU was evaluated with small benchmarks and a larger application, a Runge-Kutta ODE solver. The results show that skeletal parallel programming is a viable approach for GPU computing and that a generalized interface for multiple back ends is reasonable. The best performance gains are obtained when the computation load is large compared to memory I/O, which the lazy memory copying can help achieve. SkePU offers good performance on a more complex and realistic task such as ODE solving, with up to ten times faster run times when using SkePU with a GPU back end compared to a sequential solver running on a fast CPU. SkePU does have some disadvantages: there is some overhead in using the library, visible in the dot product and LibSolve benchmarks. Although the overhead is not large, a hand-coded solution would be best if performance is of utmost importance. Nor can all calculations be expressed in terms of skeletons; for such problems, specialized routines must still be created.
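The core skeleton idea can be sketched in a few lines: the user supplies only the elementwise function, and the same object runs it on whichever back end is selected, behind one interface. This is a minimal Python sketch of that design, not SkePU's actual C++ API; the "threads" back end stands in for SkePU's CUDA, OpenCL, and OpenMP back ends.

```python
from concurrent.futures import ThreadPoolExecutor

class Map:
    """Map skeleton with a selectable back end, echoing SkePU's design:
    the computation is specified once and the execution strategy is a
    separate, swappable choice."""

    def __init__(self, fn, backend="sequential"):
        self.fn, self.backend = fn, backend

    def __call__(self, xs):
        if self.backend == "threads":   # stand-in for a parallel back end
            with ThreadPoolExecutor() as pool:
                return list(pool.map(self.fn, xs))
        return [self.fn(x) for x in xs]

# Usage: the user code is identical regardless of back end.
square = Map(lambda x: x * x)
square_par = Map(lambda x: x * x, backend="threads")
```

Because both back ends implement the same interface, user code never changes when the execution target does, which is the portability claim the report's benchmarks evaluate.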
|