21. Parallel Islands: A Diversity Aware Tool For Parallel Computing Education. Cameron, Melissa, 21 August 2023
The rise of multiprocessors has led to the incorporation of parallel processing in virtually all segments of industry. Creating and maintaining the software that runs these systems, as well as the applications that use them, requires extensive knowledge of the concepts and skills of parallel and distributed computing (PDC). This naturally leads to increased demand for software developers familiar with PDC and increased pressure on universities to incorporate PDC concepts into their curricula. Because of the perceived difficulty of teaching PDC concepts, particularly early in the Computer Science (CS) curriculum, there is a need for educational materials to support this expansion. At the same time that CS education is wrestling with the surge in demand for graduates with PDC skills, it is also attempting to overcome a gender imbalance in CS.
The necessity of creating materials for expanding PDC education provides an opportunity to make strides in increasing diversity in CS as well. Therefore, Parallel Islands was created as a tool to aid in introducing PDC concepts in introductory CS courses in a manner that appeals to a wide diversity of students. / Master of Science
22. Aspects of practical implementations of PRAM algorithms. Ravindran, Somasundaram, January 1993
No description available.
23. A general cellular system for the design of multiprocessor architectures. Barrall, G. S., January 1994
No description available.
24. The theory and applications of ringtree networks. Xie, Hong, January 1994
No description available.
25. GPU-Based Acceleration on ACEnet for FDTD Method of Electromagnetic Field Analysis. Sun, Dachuan, 21 November 2013
Graphics Processing Unit (GPU) programming techniques have been applied to a range of scientific and engineering computations. In computational electromagnetics, use of the GPU has increased dramatically since the release of NVIDIA's Compute Unified Device Architecture (CUDA), a powerful and simple-to-use programming environment that makes GPU computing easily accessible to developers who are not specialized in computer graphics.
Recent research has focused on problems concerning the Finite-Difference Time-Domain (FDTD) simulation of electromagnetic (EM) fields. Traditional FDTD methods can run slowly because of the large memory and CPU requirements of modeling electrically large structures, so acceleration methods such as parallel programming are needed. The FDTD algorithm is well suited to multi-threaded parallel computation on a GPU, and for complex structures and procedures, high-performance GPU algorithms are crucial.
In this work, we present the implementation of GPU programming to accelerate computations for EM engineering problems. The speed-up is demonstrated through simulations run with inexpensive GPUs and on ACEnet, and the attainable efficiency is illustrated with numerical results. Using C, CUDA C, Matlab GPU, and ACEnet, we compare serial and parallel algorithms, computations with and without GPU and CUDA, different types of GPUs, and personal computers versus ACEnet. A maximum speed-up of 26.77 times is achieved, which could be boosted further as new hardware is developed. The reduction in run time will make many investigations possible and will pave the way for studies of large-scale computational electromagnetic problems that were previously impractical; this is a field that invites more in-depth study. / This is the thesis of my Master of Applied Science work at Dalhousie University.
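As an illustration of the kind of data-parallel kernel such an FDTD acceleration relies on, the sketch below shows a minimal one-dimensional FDTD update loop in CUDA C, where each thread advances one grid cell per time step. It is not taken from the thesis: the kernel names, grid size, source term, and normalized update coefficients are assumptions chosen for brevity, and a real solver would add boundary conditions, material parameters, and 2D/3D field components.

```cuda
// Minimal sketch of a 1D FDTD update in CUDA C (illustrative only; kernel
// names, grid size, source, and coefficients are assumptions, not thesis code).
#include <cuda_runtime.h>
#include <stdio.h>

#define N 4096      // number of spatial cells (assumed)
#define STEPS 1000  // number of time steps (assumed)

// Update the magnetic field H from the electric field E (normalized units).
__global__ void updateH(float *hy, const float *ez)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N - 1)
        hy[i] += 0.5f * (ez[i + 1] - ez[i]);
}

// Update the electric field E from the magnetic field H.
__global__ void updateE(float *ez, const float *hy)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < N)
        ez[i] += 0.5f * (hy[i] - hy[i - 1]);
}

// Inject a simple sinusoidal hard source at the centre of the grid.
__global__ void injectSource(float *ez, int t)
{
    ez[N / 2] = __sinf(0.1f * t);
}

int main(void)
{
    float *ez, *hy;
    cudaMalloc(&ez, N * sizeof(float));
    cudaMalloc(&hy, N * sizeof(float));
    cudaMemset(ez, 0, N * sizeof(float));
    cudaMemset(hy, 0, N * sizeof(float));

    dim3 block(256), grid((N + 255) / 256);
    for (int t = 0; t < STEPS; ++t) {
        updateH<<<grid, block>>>(hy, ez);   // each thread updates one cell
        updateE<<<grid, block>>>(ez, hy);
        injectSource<<<1, 1>>>(ez, t);      // boundary conditions omitted here
    }
    cudaDeviceSynchronize();

    float sample;
    cudaMemcpy(&sample, &ez[N / 2], sizeof(float), cudaMemcpyDeviceToHost);
    printf("Ez at midpoint after %d steps: %f\n", STEPS, sample);

    cudaFree(ez);
    cudaFree(hy);
    return 0;
}
```

Because every cell's update depends only on its immediate neighbours from the previous half-step, the field updates map naturally onto one thread per cell, which is why FDTD lends itself to the multi-thread GPU parallelization the abstract describes.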
26. Reduction of co-simulation runtime through parallel processing. Coutu, Jason Dean, 10 September 2009
During the design phase of modern digital and mixed-signal devices, simulations are run to determine the fitness of the proposed design. Some of these simulations can take large amounts of time, delaying the manufacture of the system prototype. One typical simulation is an integration simulation, which simulates the hardware and software at the same time. Most simulators used for this task are monolithic; some can interface with external libraries and other simulators, but setting this up can be tedious. This thesis proposes, implements, and evaluates a distributed simulator called PDQScS that speeds up simulation to reduce this bottleneck in the design cycle without requiring tedious separation and linking by the user. Using multiple processes on SMP machines, a reduction in simulation run time was found.
28. Performance Analysis of Graph Algorithms using Graphics Processing Units. Weng, Hui-Ze, 02 September 2010
In recent years the GPU has significantly improved computing power by increasing its number of cores. The GPU's design focuses on data-parallel processing, so there are limits on which applications can benefit from this computing power: the processing of highly dependent data, for example, cannot be parallelized well and therefore cannot take advantage of the GPU. Most GPU research has discussed the improvement of computing power; here we instead study cost effectiveness by comparing the GPU with the multi-core CPU. Exploiting the different hardware architectures of the GPU and the multi-core CPU, we implement several typical graph algorithms on each and present the experimental results. Furthermore, the analysis of cost effectiveness, including both time and monetary cost, is also discussed in this paper.
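To make the data-dependence issue concrete, the sketch below shows a level-synchronous breadth-first search on the GPU, one of the typical graph algorithms such a GPU-versus-CPU comparison might use. It is illustrative only and not code from the thesis: the CSR graph, kernel name, and launch configuration are assumptions chosen for brevity.

```cuda
// Minimal sketch of a level-synchronous BFS in CUDA (illustrative only; the
// tiny CSR graph, kernel name, and launch configuration are assumptions).
#include <cuda_runtime.h>
#include <stdio.h>

// One BFS level: each thread owns one vertex; vertices on the current
// frontier relax their unvisited neighbours. The per-level synchronization
// is where deep data dependencies limit the achievable GPU speed-up.
__global__ void bfsLevel(const int *rowPtr, const int *colIdx, int *dist,
                         int level, int numVertices, int *changed)
{
    int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v >= numVertices || dist[v] != level) return;
    for (int e = rowPtr[v]; e < rowPtr[v + 1]; ++e) {
        int u = colIdx[e];
        if (dist[u] < 0) {          // unvisited
            dist[u] = level + 1;    // benign race: every writer stores the same value
            *changed = 1;
        }
    }
}

int main(void)
{
    // Small example graph in CSR form: undirected edges 0-1, 0-2, 1-3, 2-3, 3-4.
    const int n = 5;
    int hRowPtr[] = {0, 2, 4, 6, 9, 10};
    int hColIdx[] = {1, 2, 0, 3, 0, 3, 1, 2, 4, 3};
    int hDist[]   = {0, -1, -1, -1, -1};   // BFS from source vertex 0

    int *rowPtr, *colIdx, *dist, *changed;
    cudaMalloc(&rowPtr, sizeof(hRowPtr));
    cudaMalloc(&colIdx, sizeof(hColIdx));
    cudaMalloc(&dist, sizeof(hDist));
    cudaMalloc(&changed, sizeof(int));
    cudaMemcpy(rowPtr, hRowPtr, sizeof(hRowPtr), cudaMemcpyHostToDevice);
    cudaMemcpy(colIdx, hColIdx, sizeof(hColIdx), cudaMemcpyHostToDevice);
    cudaMemcpy(dist, hDist, sizeof(hDist), cudaMemcpyHostToDevice);

    int hChanged = 1;
    for (int level = 0; hChanged; ++level) {
        hChanged = 0;
        cudaMemcpy(changed, &hChanged, sizeof(int), cudaMemcpyHostToDevice);
        bfsLevel<<<(n + 255) / 256, 256>>>(rowPtr, colIdx, dist, level, n, changed);
        cudaMemcpy(&hChanged, changed, sizeof(int), cudaMemcpyDeviceToHost);
    }

    cudaMemcpy(hDist, dist, sizeof(hDist), cudaMemcpyDeviceToHost);
    for (int v = 0; v < n; ++v)
        printf("dist(%d) = %d\n", v, hDist[v]);

    cudaFree(rowPtr); cudaFree(colIdx); cudaFree(dist); cudaFree(changed);
    return 0;
}
```

A matching multi-core CPU version of the same loop, timed on much larger graphs, would yield the kind of cost-effectiveness comparison the abstract describes; the host-device round trip at every level is also where highly dependent data erodes the GPU's advantage.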
29. Design and implementation of distributed Galois. Dhanapal, Manoj, 22 October 2013
The Galois system provides a solution to the hard problem of parallelizing irregular algorithms by exploiting amorphous data-parallelism. The present system uses the shared-memory programming model, which limits the memory and processing power available to the application. A scalable distributed parallelization tool would give the application access to a very large amount of memory and processing power by interconnecting computers through a network.
This thesis presents the design of a distributed execution programming model for the Galois system. The distributed Galois system is capable of executing irregular graph-based algorithms in a distributed environment. The API and programming model of the new distributed system were designed to mirror those of the existing shared-memory Galois, so that existing shared-memory applications can run on distributed Galois with minimal porting effort. Finally, two existing test cases were implemented on distributed Galois and shown to scale with an increasing number of hosts and threads.
30. An OR parallel logic programming language: its compiler and abstract machine. Cheng, A. S. K., January 1988
No description available.