61 |
Cooperative Execution of OpenCL Programs on Multiple Heterogeneous Devices. Pandit, Prasanna Vasant. January 2013.
Computing systems have become heterogeneous with the increasing prevalence of multi-core CPUs, Graphics Processing Units (GPUs) and other accelerators in them. OpenCL has emerged as an attractive programming framework for heterogeneous systems. However, utilizing multiple devices in OpenCL is a challenge, as it requires the programmer to explicitly map data and computation to each device. Utilizing multiple devices simultaneously to speed up execution of a kernel is even more complex, as the relative execution time of the kernel on different devices can vary significantly. Also, after each kernel execution, a coherent version of the data needs to be established. This means that, in order to utilize all devices effectively, the programmer has to spend considerable time and effort to distribute work across all devices, keep track of modified data in these devices and correctly perform a merging step to put the data together. Further, the relative performance of a program may vary across different inputs, which means a statically determined work distribution may not work well.
In this work, we present FluidiCL, an OpenCL runtime that takes a program written for a single device and uses multiple heterogeneous devices to execute each kernel. The runtime performs dynamic work distribution and cooperatively executes each kernel on all available devices. Since we consider a setup with devices having discrete address spaces, our solution ensures that execution of OpenCL work-groups on devices is adjusted by taking into account the overheads for data management. The data transfers and data merging needed to ensure coherence are handled transparently without requiring any effort from the programmer. FluidiCL also does not require prior training or profiling and is completely portable across different machines. Because it is dynamic, the runtime is able to adapt to system load. We have developed several optimizations for improving the performance of FluidiCL. We evaluate the runtime across different sets of devices. On a machine with an Intel quad-core processor and an NVidia Fermi GPU, FluidiCL shows a geomean speedup of nearly 64% over the GPU, 88% over the CPU and 14% over the best of the two devices in each benchmark. In all benchmarks, performance of our runtime comes to within 13% of the best of the two devices. FluidiCL shows similar results on a machine with a quad-core CPU and an NVidia Kepler GPU, with up to 26% speedup over the best of the two. We also present results considering an Intel Xeon Phi accelerator and a CPU and find that FluidiCL performs up to 45% faster than the best of the two devices. We extend FluidiCL from a CPU–GPU scenario to a three-device setup having a quad-core CPU, an NVidia Kepler GPU and an Intel Xeon Phi accelerator and find that FluidiCL obtains a geomean improvement of 6% in kernel execution time over the best of the three devices considered in each case.
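A minimal sketch of the mechanism such a runtime builds on (an illustration, not FluidiCL's actual code): OpenCL's global work offset lets one kernel's NDRange be split across two command queues. The split point `split` is a hypothetical parameter a dynamic scheduler would adjust, and the buffer duplication and result merging that FluidiCL handles transparently are elided here.

```c++
// Sketch only (not FluidiCL): split one kernel's 1-D NDRange between a GPU
// and a CPU queue using OpenCL's global work offset. Assumes both devices
// share one cl_context so the kernel and buffers are common; across
// platforms they would have to be duplicated, as FluidiCL does. Error
// checks elided; 'split' should be a multiple of 'local'.
#include <CL/cl.h>
#include <cstddef>

void cooperative_launch(cl_kernel kernel, cl_command_queue gpu_q,
                        cl_command_queue cpu_q, size_t global, size_t local,
                        size_t split /* work-items given to the GPU */) {
    size_t off_gpu = 0,     n_gpu = split;          // GPU: items [0, split)
    size_t off_cpu = split, n_cpu = global - split; // CPU: items [split, global)
    clEnqueueNDRangeKernel(gpu_q, kernel, 1, &off_gpu, &n_gpu, &local,
                           0, NULL, NULL);
    clEnqueueNDRangeKernel(cpu_q, kernel, 1, &off_cpu, &n_cpu, &local,
                           0, NULL, NULL);
    clFinish(gpu_q);  // each device wrote a disjoint range of work-groups,
    clFinish(cpu_q);  // so coherence amounts to merging the two halves
}
```

Because each device writes a disjoint range of work-groups, establishing a coherent copy after the kernel amounts to copying each device's portion back and merging, which is the step the runtime automates.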
|
62 |
Performance Analysis of Non Local Means Algorithm using Hardware Accelerators. Antony, Daniel Sanju. January 2016.
Image de-noising forms an integral part of image processing. It is used as a standalone algorithm for improving the quality of images obtained through a camera, as well as a starting stage for image processing applications like face recognition, super-resolution, etc. Non-Local Means (NL-Means) and the Bilateral Filter are two computationally complex de-noising algorithms that can provide good de-noising results. Due to their computational complexity, the real-time applications associated with these filters are limited.
In this thesis, we propose the use of hardware accelerators such as GPUs (Graphics Processing Units) and FPGAs (Field Programmable Gate Arrays) to speed up filter execution and implement the filters efficiently on them. The GPU-based implementation of these filters is carried out using the Open Computing Language (OpenCL). The basic objective of this research is to perform high-speed de-noising without compromising on quality. Here we implement a basic NL-Means filter, a Fast NL-Means filter, and a Bilateral filter using Gauss polynomial decomposition on the GPU. We also propose a modification to the existing NL-Means algorithm and the Gauss polynomial Bilateral filter: instead of the Gaussian spatial kernel used in the standard algorithm, a box spatial kernel is introduced to improve the speed of execution of the algorithm. This research work is a step towards making real-time implementation of these algorithms possible. It has been found from the results that the NL-Means implementation on GPU using OpenCL is about 25x faster than a regular CPU-based implementation for larger images (1024x1024). For Fast NL-Means, the GPU-based implementation is about 90x faster than the CPU implementation. Even with the improved execution time, embedded-system applications of NL-Means are limited due to the power and thermal restrictions of the GPU device. In order to create a lower-power and faster implementation, we have implemented the algorithm on an FPGA. FPGAs are reconfigurable devices and enable us to create a custom architecture for the parallel execution of the algorithm. It was found that the execution time for smaller images (256x256) is about 200x faster than the CPU implementation and about 25x faster than GPU execution. Moreover, the power requirement of the FPGA design of the algorithm (0.53 W) is much lower compared to the CPU (30 W) and GPU (200 W).
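To illustrate the box-kernel idea (a sketch under our own naming, not the thesis code), the single-pixel NL-Means below uses a plain sum of squared differences as the patch distance. The standard Gaussian spatial kernel would additionally weight each term by exp(-(px^2+py^2)/(2*sigma_s^2)); dropping those per-offset multiplies is where the speedup comes from.

```c++
// Sketch: denoise one pixel (x, y) with NL-Means using a *box* spatial
// kernel, so every pixel in a patch contributes equally and the patch
// distance is a plain SSD. Parameter names (patch, search, h) are ours.
#include <algorithm>
#include <cmath>
#include <vector>

float nlmeans_pixel(const std::vector<float>& img, int W, int H,
                    int x, int y, int patch, int search, float h) {
    auto at = [&](int i, int j) {
        i = std::min(std::max(i, 0), W - 1);   // clamp at image borders
        j = std::min(std::max(j, 0), H - 1);
        return img[j * W + i];
    };
    float num = 0.f, den = 0.f;
    for (int sy = -search; sy <= search; ++sy)
      for (int sx = -search; sx <= search; ++sx) {
        float d2 = 0.f;                         // box-kernel patch distance
        for (int py = -patch; py <= patch; ++py)
          for (int px = -patch; px <= patch; ++px) {
            float diff = at(x+px, y+py) - at(x+sx+px, y+sy+py);
            d2 += diff * diff;                  // no per-offset Gaussian weight
          }
        int n = (2*patch+1) * (2*patch+1);
        float w = std::exp(-d2 / (n * h * h));  // similarity weight
        num += w * at(x+sx, y+sy);
        den += w;
      }
    return num / den;
}
```

Every output pixel is independent, which is why the filter maps naturally to one GPU work-item per pixel or to a pipelined FPGA datapath.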
|
63 |
Benchmark pro zařízení s podporou OpenGL ES 3.0 / Benchmark for OpenGL ES 3.0 Devices. Kimer, Tomáš. January 2014.
This thesis deals with the development of a benchmark application for OpenGL ES 3.0 devices using realistic real-time rendering of 3D scenes. The first part covers the history and new features of the OpenGL ES 3.0 graphics library. The next part briefly describes selected algorithms for realistic real-time rendering of 3D scenes that can be implemented using the new features of the discussed library. The design of the benchmark application is covered next, including the design of an online result database containing detailed device specifications. The last part covers the implementation on the Android and Windows platforms and the testing on mobile devices after publishing the application on Google Play. Finally, the results and possibilities of further development are discussed.
|
64 |
Investigation of hierarchical deep neural network structure for facial expression recognition. Motembe, Dodi. 01 1900.
Facial expression recognition (FER) is still a challenging problem, and machines struggle to effectively comprehend the dynamic shifts in facial expressions of human emotions. The existing systems that have proven effective consist of deeper network structures that need powerful and expensive hardware; the deeper the network, the longer the training and testing, and many systems use expensive GPUs to make the process faster. To remedy these challenges while maintaining the main goal of improving the recognition accuracy rate, we create a generic hierarchical structure with variable settings. This generic structure has a hierarchy of three convolutional blocks, two dropout blocks and one fully connected block. From this generic structure we derived four different network structures to be investigated according to their performance, and from each of these we derived six further network structures by varying parameters. The variable parameters under analysis are the size of the filters of the convolutional maps and the max-pooling, as well as the number of convolutional maps. In total, we have 24 network structures to investigate, six per case. After many repeated experiments, case 1a emerged as the top performer of its group, and case 2a, case 3c and case 4c outperformed the others in their respective groups. A comparison of the four group winners indicates that case 2a is the optimal structure with optimal parameters, as it outperformed the other group winners. The best network structure was chosen by considering the minimum, average and maximum accuracy over 15 repeated training runs. All 24 proposed network structures were tested using two of the most widely used FER datasets, CK+ and JAFFE. After repeated simulations, the results demonstrate that our inexpensive optimal network architecture achieved 98.11 % accuracy on the CK+ dataset. We also tested our optimal network architecture on the JAFFE dataset, where the experimental results show 84.38 % accuracy using just a standard CPU and simpler procedures. We also compared the four group winners with the performance of other existing FER models recorded recently in two studies that used the same two datasets. Three of our four group winners (case 1a, case 2a and case 4c) recorded only 1.22 % less than the accuracy of the top-performing model on the CK+ dataset, and two of our network structures, case 2a and case 3c, came in third, beating other models on the JAFFE dataset. / Electrical and Mining Engineering
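As a hypothetical illustration of the generic structure described above (the block counts come from the abstract; every numeric value below is an invented placeholder, not a thesis setting), the swept parameters could be captured in a configuration like this:

```c++
// Hypothetical illustration only: the generic hierarchy of three
// convolutional blocks, two dropout blocks and one fully connected block,
// with the two variable parameters the thesis sweeps (kernel/pooling sizes
// and number of convolutional maps). Values are placeholders.
#include <array>
#include <cstdio>

struct ConvBlock {
    int maps;    // number of convolutional feature maps
    int kernel;  // filter size (kernel x kernel)
    int pool;    // max-pooling window (pool x pool)
};

struct NetworkConfig {
    std::array<ConvBlock, 3> conv;  // three convolutional blocks
    float dropout1, dropout2;       // two dropout blocks
    int fc_units;                   // one fully connected block
};

int main() {
    NetworkConfig cfg;
    cfg.conv = {{ {32, 3, 2}, {64, 3, 2}, {128, 3, 2} }};
    cfg.dropout1 = 0.25f;
    cfg.dropout2 = 0.5f;
    cfg.fc_units = 256;
    for (const ConvBlock& b : cfg.conv)
        std::printf("conv: %d maps, %dx%d kernel, %dx%d pool\n",
                    b.maps, b.kernel, b.kernel, b.pool, b.pool);
    return 0;
}
```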
|
65 |
On GPU Assisted Polar Decoding : Evaluating the Parallelization of the Successive Cancellation Algorithm using Graphics Processing Units / Polärkodning med hjälp av GPU:er : En utvärdering av parallelliseringsmöjligheterna av Successive Cancellation-algoritmen med hjälp av grafikprocessorer. Nordqvist, Siri. January 2023.
In telecommunication, messages sent through a wireless medium often experience noise interfering with the signal in a way that corrupts the messages. As the demand for high throughput in mobile networks increases, algorithms that can detect and correct these corrupted messages quickly and accurately are of interest to the industry. Polar codes have been chosen by the Third Generation Partnership Project as the error correction code for 5G New Radio control channels. This thesis work aimed to investigate whether the Successive Cancellation (SC) polar decoding algorithm could be parallelized and whether a graphics processing unit (GPU) can be utilized to optimize the execution time of the algorithm. The SC decoder was enhanced by implementing tree pruning and support for GPUs to leverage their parallelism. The difference in execution time between the concurrent and sequential versions of the SC algorithm, with and without tree pruning, was evaluated. The tree-pruning SC algorithm almost always offered shorter execution times than the SC algorithm without tree pruning. However, the support for GPUs did not reduce the execution time in these tests. Thus, based on these results, the GPU cannot be said with certainty to improve this type of enhanced SC algorithm.
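For readers unfamiliar with SC decoding, the sketch below shows a minimal recursive successive cancellation decoder over log-likelihood ratios (our own simplification, not the thesis implementation; it uses the min-sum approximation of the check-node update and assumes natural bit order). The per-index f/g loops are the data-parallel work a GPU could take on; the bit-by-bit recursion is the serial dependency that makes SC hard to parallelize, which is what the thesis investigates.

```c++
// Minimal recursive SC decoder over LLRs (a sketch, not the thesis code).
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// f: check-node combine (min-sum approximation of boxplus).
static double f_op(double a, double b) {
    return std::copysign(1.0, a) * std::copysign(1.0, b) *
           std::min(std::fabs(a), std::fabs(b));
}
// g: variable-node combine, conditioned on the already-decoded bit u.
static double g_op(double a, double b, int u) { return u ? b - a : b + a; }

// Decodes llr.size() (a power of two) channel LLRs. frozen[i] marks frozen
// positions (always decoded as 0); bit decisions land in decoded[lo..lo+n-1].
// Returns the partial sums (re-encoded bits) needed by the parent's g step.
std::vector<int> sc_decode(const std::vector<double>& llr,
                           const std::vector<bool>& frozen,
                           std::vector<int>& decoded, size_t lo = 0) {
    size_t n = llr.size();
    if (n == 1) {
        int u = (!frozen[lo] && llr[0] < 0) ? 1 : 0;
        decoded[lo] = u;
        return {u};
    }
    size_t h = n / 2;
    std::vector<double> l(h);
    for (size_t i = 0; i < h; ++i) l[i] = f_op(llr[i], llr[i + h]);  // parallel
    std::vector<int> u1 = sc_decode(l, frozen, decoded, lo);          // serial

    for (size_t i = 0; i < h; ++i) l[i] = g_op(llr[i], llr[i + h], u1[i]);
    std::vector<int> u2 = sc_decode(l, frozen, decoded, lo + h);

    std::vector<int> u(n);  // combine partial sums: (u1 XOR u2, u2)
    for (size_t i = 0; i < h; ++i) { u[i] = u1[i] ^ u2[i]; u[i + h] = u2[i]; }
    return u;
}
```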
|
66 |
Implementation and optimization of LDPC decoding algorithms tailored for Nvidia GPUs in 5G / Implementering och optimering av LDPC-avkodningsalgoritmer anpassade för Nvidia GPU:er i 5G. Salomonsson, Benjamin. January 2022.
Low-Density Parity-Check (LDPC) codes are linear error-correcting codes used to establish reliable communication between units on a noisy transmission channel in mobile telecommunications. LDPC algorithms detect and recover altered or corrupted message bits using sparse parity-check matrices in order to decipher messages correctly. LDPC codes have been shown to be fitting coding schemes for fifth generation (5G) New Radio (NR), according to the Third Generation Partnership Project (3GPP). TietoEvry, a consultancy in telecom, has discovered that optimizations of LDPC decoding algorithms can be achieved with the use of a parallel computing platform called Compute Unified Device Architecture (CUDA), developed by NVIDIA. This platform utilizes the capabilities of a graphics processing unit (GPU) rather than a central processing unit (CPU), providing massively parallel computation. An optimized version of an LDPC decoding algorithm, the Min-Sum Algorithm (MSA), is implemented in CUDA and in C++ to compare execution times and explore the capabilities that CUDA offers. The testing is done with a set of 12 sparse parity-check matrices and input-channel messages of different sizes. As a result, the CUDA implementation executes approximately 55% faster than a standard, unoptimized C++ implementation.
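A sketch of the MSA's core step (our illustration, not TietoEvry's code): each check-to-variable message needs the minimum magnitude over all *other* incoming messages, which the standard two-minima trick computes in one pass. Since every check node is independent, a CUDA implementation can assign one thread (or warp) per check node.

```c++
// One Min-Sum check-node update (a sketch). v2c holds the incoming
// variable-to-check LLR messages for one check node; c2v receives the
// outgoing check-to-variable messages.
#include <cmath>
#include <cstddef>
#include <vector>

void minsum_check_node(const std::vector<double>& v2c,
                       std::vector<double>& c2v) {
    size_t d = v2c.size();
    double min1 = INFINITY, min2 = INFINITY;  // smallest / second-smallest
    size_t argmin = 0;
    int sign_prod = 1;                        // product of all message signs
    for (size_t i = 0; i < d; ++i) {
        double m = std::fabs(v2c[i]);
        if (m < min1) { min2 = min1; min1 = m; argmin = i; }
        else if (m < min2) { min2 = m; }
        if (v2c[i] < 0) sign_prod = -sign_prod;
    }
    for (size_t i = 0; i < d; ++i) {
        // Excluding edge i's own sign and magnitude gives the message
        // "product of other signs" times "minimum of other magnitudes".
        int sign = (v2c[i] < 0) ? -sign_prod : sign_prod;
        c2v[i] = sign * ((i == argmin) ? min2 : min1);
    }
}
```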
|
67 |
Methods for 3D Structured Light Sensor Calibration and GPU Accelerated Colormap. Kurella, Venu. January 2018.
In manufacturing, metrological inspection is a time-consuming process. The higher the required precision in inspection, the longer the inspection time. This is due to both slow devices that collect measurement data and slow computational methods that process the data. The goal of this work is to propose methods to speed up some of these processes. Conventional measurement devices like Coordinate Measuring Machines (CMMs) have high precision but low measurement speed, while new digitizer technologies have high speed but low precision. Using these devices in synergy gives a significant improvement in measurement speed without loss of precision. The method of synergistic integration of an advanced digitizer with a CMM is discussed. Computational aspects of the inspection process are addressed next. Once a part is measured, the measurement data is compared against its model to check for tolerances. This comparison is a time-consuming process on conventional CPUs, so we developed and benchmarked GPU accelerations for it. Finally, naive data-fitting methods can produce misleading results in cases with non-uniform data. Weighted total least-squares methods can compensate for non-uniformity. We show how they can be accelerated with GPUs, using plane fitting as an example. / Thesis / Doctor of Philosophy (PhD)
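As a sketch of the weighted plane-fitting step (our formulation, with an ad hoc power iteration standing in for a library eigen-solver): the total least-squares plane passes through the weighted centroid, and its normal is the eigenvector of the weighted covariance matrix with the smallest eigenvalue. The per-point accumulations are independent, which is what a GPU would parallelize with one thread per point plus a reduction.

```c++
// Weighted total least-squares plane fit (sketch). Returns {nx, ny, nz, d}
// with n . p = d for points p on the plane.
#include <array>
#include <cmath>
#include <vector>

struct Pt { double x, y, z, w; };  // one measurement and its weight

std::array<double, 4> fit_plane_wtls(const std::vector<Pt>& pts) {
    double W = 0, cx = 0, cy = 0, cz = 0;
    for (const Pt& p : pts) { W += p.w; cx += p.w*p.x; cy += p.w*p.y; cz += p.w*p.z; }
    cx /= W; cy /= W; cz /= W;                 // weighted centroid

    double C[3][3] = {};                       // weighted covariance matrix
    for (const Pt& p : pts) {
        double d[3] = {p.x - cx, p.y - cy, p.z - cz};
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j) C[i][j] += p.w * d[i] * d[j];
    }
    // Smallest eigenvector of C via power iteration on (t*I - C): t = trace(C)
    // bounds C's largest eigenvalue, so the spectrum flips and the dominant
    // eigenvector of (t*I - C) is the smallest-eigenvalue eigenvector of C.
    double t = C[0][0] + C[1][1] + C[2][2];
    double n[3] = {1, 1, 1};
    for (int it = 0; it < 100; ++it) {
        double m[3];
        for (int i = 0; i < 3; ++i)
            m[i] = t*n[i] - (C[i][0]*n[0] + C[i][1]*n[1] + C[i][2]*n[2]);
        double len = std::sqrt(m[0]*m[0] + m[1]*m[1] + m[2]*m[2]);
        for (int i = 0; i < 3; ++i) n[i] = m[i] / len;
    }
    return {n[0], n[1], n[2], n[0]*cx + n[1]*cy + n[2]*cz};
}
```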
|
68 |
Modeling and Analysis of Large-Scale On-Chip Interconnects. Feng, Zhuo. December 2009.
As IC technologies scale to the nanometer regime, efficient and accurate modeling and analysis of VLSI systems with billions of transistors and interconnects becomes increasingly critical and difficult. VLSI systems impacted by increasingly high-dimensional process-voltage-temperature (PVT) variations demand much more modeling and analysis effort than ever before, while the analysis of large-scale on-chip interconnects, which requires solving tens of millions of unknowns, imposes great challenges in computer-aided design. This dissertation presents new methodologies for addressing these two important challenges in large-scale on-chip interconnect modeling and analysis.

In the past, standard statistical circuit modeling techniques usually employed principal component analysis (PCA) and its variants to reduce parameter dimensionality. Although widely adopted, these techniques can be very limited, since parameter dimension reduction is achieved by merely considering the statistical distributions of the controlling parameters while neglecting the important correspondence between these parameters and the circuit performances (responses) under modeling. This dissertation presents a variety of performance-oriented parameter dimension reduction methods that can lead to more than one order of magnitude parameter reduction for a variety of VLSI circuit modeling and analysis problems.

The sheer size of present-day power/ground distribution networks makes their analysis and verification tasks extremely runtime- and memory-inefficient, and at the same time limits the extent to which these networks can be optimized. Given today's commodity graphics processing units (GPUs), which can deliver more than 500 GFLOPS (billion floating-point operations per second) of computing power and 100 GB/s of memory bandwidth, more than 10X greater than offered by modern general-purpose quad-core microprocessors, it is very desirable to convert this impressive GPU computing power into usable design automation tools for VLSI verification. In this dissertation, for the first time, we show how to exploit recent massively parallel single-instruction multiple-thread (SIMT) graphics processing unit (GPU) platforms to tackle power grid analysis with very promising performance. Our GPU-based network analyzer is capable of solving tens of millions of power grid nodes in just a few seconds. Additionally, with the above GPU-based simulation framework, more challenging three-dimensional full-chip thermal analysis can be solved much more efficiently than ever before.
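A sketch of why power grid analysis maps well to SIMT hardware (an illustration only; the dissertation's solver is more sophisticated than plain Jacobi): in a DC power-grid system G*v = i, each node's relaxation update reads only its neighbors' previous voltages, so every node can be handled by an independent thread.

```c++
// One Jacobi sweep over a power-grid conductance system G*v = i (sketch).
// For row k: g_kk * v_k - sum_j g_kj * v_j = I_k, with g_kj > 0 the
// conductance to neighbor j. Every k is independent: on a GPU, one thread
// per node.
#include <cstddef>
#include <vector>

struct NodeCSR {                 // sparse row for one grid node
    std::vector<size_t> nbr;     // neighbor node indices
    std::vector<double> g;       // conductances to those neighbors
    double g_self;               // diagonal: sum of incident conductances
    double current;              // injected current I_k at this node
};

void jacobi_sweep(const std::vector<NodeCSR>& grid,
                  const std::vector<double>& v_old,
                  std::vector<double>& v_new) {
    for (size_t k = 0; k < grid.size(); ++k) {   // GPU: one thread per k
        double acc = grid[k].current;
        for (size_t j = 0; j < grid[k].nbr.size(); ++j)
            acc += grid[k].g[j] * v_old[grid[k].nbr[j]];
        v_new[k] = acc / grid[k].g_self;         // v_k = (I_k + sum)/g_kk
    }
}
```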
|
69 |
Efficient Betweenness Centrality Computations on Hybrid CPU-GPU Systems. Mishra, Ashirbad. January 2016.
Analysis of networks is of wide interest because networks can be interpreted for many purposes, and different features require different metrics to measure and interpret them. Measuring the relative importance of each vertex in a network is one of the most fundamental building blocks in network analysis. Betweenness Centrality (BC) is one such metric that plays a key role in many real-world applications. BC is an important graph analytics application for large-scale graphs. However, it is one of the most computationally intensive kernels to execute, and measuring centrality in billion-scale graphs is quite challenging.
While there are several existing efforts towards parallelizing BC algorithms on multi-core CPUs and many-core GPUs, in this work we propose a novel fine-grained CPU-GPU hybrid algorithm that partitions a graph into two partitions, one each for the CPU and the GPU. Our method performs BC computations for the graph on both the CPU and GPU resources simultaneously, resulting in a very small number of CPU-GPU synchronizations and hence less time spent on communication. The BC algorithm consists of two phases, the forward phase and the backward phase. In the forward phase, we initially find the paths that are needed by either partition, after which each partition is executed on its processor in an asynchronous manner. We initially compute border matrices for each partition, which store the relative distances between each pair of border vertices in a partition. The matrices are used in the forward-phase calculations for all the sources. In this way, our hybrid BC algorithm leverages the multi-source property inherent in the BC problem. We present a proof of correctness and the bounds on the number of iterations for each source. We also perform a novel hybrid and asynchronous backward phase, in which each partition communicates with the other only when there is a path that crosses the partition, hence performing minimal CPU-GPU synchronizations.
We use a variety of implementations in our work, including node-based and edge-based parallelism with data-driven and topology-based techniques. In the implementation we show that our method also works with a variable partitioning technique, which partitions the graph into unequal parts accounting for the processing power of each processor; due to this technique, our implementations achieve almost equal utilization of both processors. For large-scale graphs, the size of the border matrix also becomes large, so to accommodate the matrix we present various reduction techniques that use properties inherent in the shortest-path problem. We discuss the drawbacks of performing shortest-path computations at large scale and also provide various solutions.
Evaluations using a large number of graphs with different characteristics show that our hybrid approach, without variable partitioning and border matrix reduction, gives a 67% improvement in performance and 64-98.5% fewer CPU-GPU communications than the state-of-the-art hybrid algorithm based on the popular Bulk Synchronous Parallel (BSP) approach implemented in TOTEM. This shows our algorithm's strength in reducing the need for large synchronizations. Implementing variable partitioning, border matrix reduction and backward-phase optimizations on our hybrid algorithm provides up to 10x speedup. We compare our optimized implementation with CPU-only and GPU-only codes based on our forward-phase and backward-phase kernels, and show around 2-8x speedup over the CPU-only code while accommodating large graphs that the GPU-only code cannot. We also show that our method's performance is competitive with the state-of-the-art multi-core CPU implementation and performs 40-52% better than GPU implementations on large graphs. We show the drawbacks of CPU-only and GPU-only implementations to motivate the challenges that graph algorithms face in large-scale computing, suggesting that a hybrid or distributed approach is a better way of overcoming these hurdles.
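For reference, the two phases the text refers to are those of Brandes' algorithm, sketched below sequentially for unweighted graphs (our sketch; the thesis partitions and parallelizes this across CPU and GPU): a forward BFS that counts shortest paths from each source, and a backward pass that accumulates dependencies.

```c++
// Sequential Brandes betweenness centrality for an unweighted graph given
// as an adjacency list (sketch).
#include <queue>
#include <stack>
#include <vector>

std::vector<double> brandes_bc(const std::vector<std::vector<int>>& adj) {
    int n = (int)adj.size();
    std::vector<double> bc(n, 0.0);
    for (int s = 0; s < n; ++s) {
        std::vector<long long> sigma(n, 0);     // # shortest paths from s
        std::vector<int> dist(n, -1);
        std::vector<std::vector<int>> pred(n);  // shortest-path predecessors
        std::stack<int> order;                  // vertices by non-decreasing dist
        std::queue<int> q;
        sigma[s] = 1; dist[s] = 0; q.push(s);
        while (!q.empty()) {                    // forward phase: BFS from s
            int v = q.front(); q.pop(); order.push(v);
            for (int w : adj[v]) {
                if (dist[w] < 0) { dist[w] = dist[v] + 1; q.push(w); }
                if (dist[w] == dist[v] + 1) {
                    sigma[w] += sigma[v];
                    pred[w].push_back(v);
                }
            }
        }
        std::vector<double> delta(n, 0.0);      // backward phase: dependencies
        while (!order.empty()) {
            int w = order.top(); order.pop();
            for (int v : pred[w])
                delta[v] += (double)sigma[v] / sigma[w] * (1.0 + delta[w]);
            if (w != s) bc[w] += delta[w];
        }
    }
    return bc;
}
```

The outer loop over sources is embarrassingly parallel, which is the multi-source property the hybrid algorithm leverages.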
|
70 |
Automatic Data Allocation, Buffer Management and Data Movement for Multi-GPU Machines. Ramashekar, Thejas. 10 1900.
Multi-GPU machines are being increasingly used in high performance computing. These machines are used both as standalone workstations to run computations on medium to large data sizes (tens of gigabytes) and as nodes in a CPU-multi-GPU cluster handling very large data sizes (hundreds of gigabytes to a few terabytes). Each GPU in such a machine has its own memory and does not share the address space either with the host CPU or with other GPUs. Hence, applications utilizing multiple GPUs have to manually allocate and manage data on each GPU.
A significant body of scientific applications that utilize multi-GPU machines contain computations inside affine loop nests, i.e., loop nests that have affine bounds and affine array access functions. These include stencils, linear-algebra kernels, dynamic programming codes and data-mining applications. Data allocation, buffer management and coherency handling are critical steps that need to be performed to run affine applications on multi-GPU machines. Existing works that propose to automate these steps have limitations and inefficiencies in terms of allocation sizes, exploiting reuse, transfer costs and scalability. An automatic multi-GPU memory manager that can overcome these limitations and enable applications to achieve scalable performance is highly desired.
One technique that has been used in certain memory management contexts in the literature is that of bounding boxes. The bounding box of an array, for a given tile, is the smallest hyper-rectangle that encapsulates all the array elements accessed by that tile. In this thesis, we exploit the potential of bounding boxes for memory management far beyond their current usage in the literature.
In this thesis, we propose a scalable and fully automatic data allocation and buffer management scheme for affine loop nests on multi-GPU machines, called the Bounding Box based Memory Manager (BBMM). BBMM is a compiler-assisted runtime memory manager. At compile time, it uses static analysis techniques to identify the set of bounding boxes accessed by a computation tile. At run time, it uses bounding-box set operations such as union, intersection, difference, and finding subset and superset relations to compute a set of disjoint bounding boxes from the set identified at compile time. It also exploits the architectural capability provided by GPUs to perform fast transfers of rectangular (strided) regions of memory, and hence performs all data transfers in terms of bounding boxes. BBMM uses these techniques to automatically allocate and manage the data required by applications (suitably tiled and parallelized for GPUs). This allows it to (1) allocate only as much data as is required (or close to it) by the computations running on each GPU, (2) efficiently track buffer allocations and hence maximize data reuse across tiles and minimize data transfer overhead, and (3) as a result, enable applications to maximize the utilization of the combined memory on multi-GPU machines. BBMM can work with any choice of parallelizing transformations, computation placement and scheduling schemes, whether static or dynamic. Experiments run on a system with four GPUs with various scientific programs showed that BBMM is able to reduce data allocations on each GPU by up to 75% compared to current allocation schemes, yield at least 88% of the performance of hand-optimized OpenCL codes, and allow excellent weak scaling.
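A sketch of the bounding-box primitives such a scheme rests on (our illustration, not BBMM's code): hyper-rectangles admit per-dimension intersection, containment and size computations in constant time per dimension, and each box corresponds directly to one strided rectangular GPU transfer.

```c++
// Core bounding-box operations over hyper-rectangles (sketch). Boxes store
// inclusive per-dimension bounds.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Box {
    std::vector<long> lo, hi;  // inclusive bounds, one entry per dimension
};

bool intersects(const Box& a, const Box& b) {
    for (size_t d = 0; d < a.lo.size(); ++d)
        if (a.hi[d] < b.lo[d] || b.hi[d] < a.lo[d]) return false;
    return true;
}

bool contains(const Box& outer, const Box& inner) {   // superset test
    for (size_t d = 0; d < outer.lo.size(); ++d)
        if (outer.lo[d] > inner.lo[d] || outer.hi[d] < inner.hi[d]) return false;
    return true;
}

Box intersection(const Box& a, const Box& b) {        // assumes intersects(a, b)
    Box r{a.lo, a.hi};
    for (size_t d = 0; d < a.lo.size(); ++d) {
        r.lo[d] = std::max(a.lo[d], b.lo[d]);
        r.hi[d] = std::min(a.hi[d], b.hi[d]);
    }
    return r;
}

long elements(const Box& b) {                         // transfer size (elements)
    long n = 1;
    for (size_t d = 0; d < b.lo.size(); ++d) n *= b.hi[d] - b.lo[d] + 1;
    return n;
}
```

Set union and difference of boxes are generally not boxes, which is why a runtime like BBMM maintains *sets* of disjoint boxes and normalizes them with operations like these.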
|