1 |
Sparse array representations and some selected array operations on GPUs
Wang, Hairong, 01 September 2014
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 2014.

A multi-dimensional data model provides a good conceptual view of the data in data warehousing and On-Line
Analytical Processing (OLAP). A typical representation of such a data model is as a multi-dimensional array
which is well suited when the array is dense. If the array is sparse, i.e., has a small number of non-zero elements
relative to the product of the cardinalities of the dimensions, using a multi-dimensional array to represent the
data set requires extremely large memory space while the actual data elements occupy a relatively small fraction
of the space. Existing storage schemes for Multi-Dimensional Sparse Arrays (MDSAs) of higher dimensions
k (k > 2) focus on optimizing storage utilization and offer little flexibility in data access efficiency.
Most efficient storage schemes for sparse arrays are limited to matrices, that is, arrays in two dimensions. In
this dissertation, we introduce four storage schemes for MDSAs that handle the sparsity of the array with two
primary goals: reducing the storage overhead and maintaining efficient data element access. These schemes,
including a well-known method referred to as Bit Encoded Sparse Storage (BESS), were evaluated and
compared on four basic array operations, namely construction of a scheme, large-scale random element access,
sub-array retrieval, and multi-dimensional aggregation. The four proposed storage schemes, together
with the evaluation results, are: i.) the extended compressed row storage (xCRS), which extends the CRS method
for sparse matrix storage to sparse arrays of higher dimensions and achieves the best data element access
efficiency among the methods compared; ii.) the bit-encoded xCRS (BxCRS), which optimizes the storage
utilization of xCRS by applying data compression with run-length encoding, while maintaining its
data access efficiency; iii.) a hybrid approach (Hybrid), which provides the best control of the balance between
storage utilization and data manipulation efficiency by combining xCRS and BESS; and iv.) the PATRICIA
trie compressed storage (PTCS), which uses a PATRICIA trie to store the valid non-zero array elements. PTCS
supports efficient data access and has the unique property of supporting update operations conveniently.
For multi-dimensional aggregation, BESS performed best, closely followed by the other schemes.
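
For orientation, here is a minimal sketch of classical 2-D compressed row storage (CRS), the representation that xCRS generalizes to higher dimensions. The layout and names are illustrative assumptions, not the dissertation's implementation.

#include <cstdio>
#include <vector>

// Classical compressed row storage (CRS) for a 2-D sparse matrix:
// row_ptr[i]..row_ptr[i+1] delimits the non-zeros of row i.
struct CrsMatrix {
    int rows, cols;
    std::vector<int>    row_ptr;  // size rows + 1
    std::vector<int>    col_idx;  // size nnz
    std::vector<double> val;      // size nnz
};

// Element access: scan the non-zeros of row i for column j.
double get(const CrsMatrix& m, int i, int j) {
    for (int k = m.row_ptr[i]; k < m.row_ptr[i + 1]; ++k)
        if (m.col_idx[k] == j) return m.val[k];
    return 0.0;  // absent entries are implicit zeros
}

int main() {
    // 3x4 matrix with non-zeros (0,1)=5, (1,3)=7, (2,0)=2
    CrsMatrix m{3, 4, {0, 1, 2, 3}, {1, 3, 0}, {5.0, 7.0, 2.0}};
    printf("%g %g\n", get(m, 1, 3), get(m, 1, 2));  // prints: 7 0
    return 0;
}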
We also addressed the problem of accelerating some selected array operations using General Purpose Computing
on Graphics Processing Units (GPGPU). The experimental results showed different levels of speedup,
ranging from 2 to over 20 times, on large-scale random element access and sub-array retrieval. In particular, we
utilized GPUs for the computation of the cube operator, a special case of multi-dimensional aggregation, using
BESS. This resulted in a 5- to 8-fold speedup compared with our CPU-only implementation. The main
contributions of this dissertation include the development, implementation and evaluation of four efficient
schemes to store multi-dimensional sparse arrays, as well as the utilization of the massive parallelism of GPUs
for some data warehousing operations.
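
A rough sketch of how GPU-accelerated large-scale random element access could look, assuming one thread per query over CRS-style arrays as above. The names are hypothetical, not the dissertation's code.

#include <cuda_runtime.h>

// One thread per query: fetch element (qi[t], qj[t]) from a CRS matrix in
// device memory. Hypothetical layout; shows only the access pattern.
__global__ void crsRandomAccess(const int* row_ptr, const int* col_idx,
                                const double* val, const int* qi,
                                const int* qj, double* out, int nQueries) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= nQueries) return;
    double r = 0.0;                            // implicit zero if absent
    for (int k = row_ptr[qi[t]]; k < row_ptr[qi[t] + 1]; ++k)
        if (col_idx[k] == qj[t]) { r = val[k]; break; }
    out[t] = r;
}
// launch (illustrative): crsRandomAccess<<<(nQueries + 255)/256, 256>>>(...);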
|
2 |
FPGA prototyping of custom GPGPUs
Nigania, Nimit, 08 January 2014
Prototyping new systems on hardware is a time-consuming task with limited scope for architectural exploration. The aim of this work was to perform fast prototyping of general-purpose graphics processing units (GPGPUs) on field programmable gate arrays (FPGAs) using a novel tool chain. This hardware flow, combined with a higher-level simulation flow using the same source code, allowed us to create a complete tool chain for studying and building future architectures with new technologies. It also gave us enough flexibility at different granularities to make architectural decisions. We also discuss some example systems built using this tool chain, along with some results.
|
3 |
Development of imaging and reconstruction algorithms on parallel processing architectures for applications in non-destructive testing
Pedron, Antoine, 28 May 2013
This thesis work is placed at the interface between the scientific domain of ultrasonic non-destructive testing and algorithm-architecture adequation. Ultrasonic non-destructive testing includes a group of analysis techniques used in science and industry to evaluate the properties of a material, component, or system without causing damage. In order to characterize possible defects, determining their position, size and shape, imaging and reconstruction tools have been developed at CEA-LIST, within the CIVA software platform. The evolution of acquisition sensors implies a continuous growth of datasets, and consequently more and more computing power is needed to maintain interactive reconstructions. General purpose processors (GPP) evolving towards parallelism and emerging architectures such as GPUs offer large acceleration possibilities that can be applied to these algorithms. The main goal of the thesis is to evaluate the acceleration that can be obtained for two reconstruction algorithms on these architectures. These two algorithms differ in their parallelization scheme. The first one can be parallelized straightforwardly on GPP, whereas on GPU an intensive use of atomic instructions is required. For the second algorithm, parallelism is easier to express, but loop ordering on GPP, as well as thread scheduling and a good use of shared memory on GPU, are necessary in order to obtain efficient results. Different APIs and libraries, such as OpenMP, CUDA and OpenCL, are evaluated through chosen benchmarks. An integration of both algorithms in the CIVA software platform is proposed, and different issues related to code maintenance and durability are discussed.
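
To make the atomic-instruction point concrete: in a scatter-style reconstruction, many GPU threads may accumulate into the same output cell, so unsynchronized additions would race. A minimal sketch follows, with hypothetical names unrelated to the CIVA code base.

#include <cuda_runtime.h>

// Scatter-style reconstruction step: each thread processes one measured
// sample and deposits its contribution into a shared output grid. Several
// threads may target the same cell, so the accumulation must be atomic.
__global__ void depositContributions(const float* sample, const int* cell,
                                     float* grid, int nSamples) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= nSamples) return;
    // atomicAdd serializes concurrent updates to the same grid cell;
    // this is correct but can throttle throughput when collisions are common.
    atomicAdd(&grid[cell[t]], sample[t]);
}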
|
4 |
Parallelization of the FDK algorithm for 3D reconstruction of tomographic images using graphics processing units and CUDA-C
Sánchez Domínguez, Joel, 12 January 2012
Conselho Nacional de Desenvolvimento Científico e Tecnológico

Imaging using computed tomography has revolutionized the diagnosis of diseases in medicine and is widely used in different areas of scientific research. As part of the process of obtaining three-dimensional tomographic images, a set of radiographs is processed by a computer algorithm; the most widely used today is the Feldkamp, Davis and Kress (FDK) algorithm. The use of parallel processing to speed up the calculations of computer algorithms, with the different technologies available on the market, has shown its usefulness in reducing processing times. The present work presents the parallelization of the FDK three-dimensional image reconstruction algorithm using graphics processing units (GPUs) and the CUDA-C language. GPUs are presented as a viable option for parallel computing, and the introductory concepts associated with computed tomography, GPUs, CUDA-C and parallel processing are covered. The parallel version of the FDK algorithm executed on the GPU is compared with a serial version of the same algorithm, showing higher processing speed. Performance tests were made on two GPUs of different capacities: the NVIDIA GeForce 9400GT (16 cores) and the NVIDIA Quadro 2000 (192 cores).
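
As a hedged illustration of why FDK-style reconstruction parallelizes well on GPUs, the sketch below backprojects one output sample per thread. It uses a simplified 2-D parallel-beam geometry rather than the cone-beam weighting of the actual FDK algorithm; all names and layouts are assumptions.

#include <cuda_runtime.h>

// Toy backprojection in the spirit of FDK: one thread per output sample,
// accumulating over all projection angles. Not the cone-beam geometry of
// real FDK; it only demonstrates the per-sample independence.
__global__ void backproject(const float* proj,  // [nAngles][detW], filtered
                            float* slice,       // [n][n] reconstructed slice
                            int n, int nAngles, int detW) {
    int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v >= n * n) return;
    const float PI = 3.14159265f;
    float x = v % n - n / 2.0f;               // centred pixel coordinates
    float y = v / n - n / 2.0f;
    float acc = 0.0f;
    for (int a = 0; a < nAngles; ++a) {
        float th = a * PI / nAngles;
        int u = (int)(x * cosf(th) + y * sinf(th) + detW / 2.0f);
        if (u >= 0 && u < detW)
            acc += proj[a * detW + u];        // nearest-neighbour sampling
    }
    slice[v] = acc * PI / nAngles;            // angular integration weight
}
// launch (illustrative): backproject<<<(n*n + 255)/256, 256>>>(...);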
|
5 |
The effects of public opinions on exchange rate movements
Lin, Tzu Hsiang (林子翔), Unknown Date
This study explores the hypothesis that relevant information in the news, posts in forums, and discussions on social media can affect the daily movement of exchange rates. For this objective, we set up an experiment in which text mining techniques are first applied to the news, the forum and the social media to generate numerical representations of the textual information relevant to the exchange rate. Machine learning techniques are then applied to learn the relationship between the derived numerical representations and the movement of exchange rates. Finally, we test the hypothesis by examining the effectiveness of the obtained relationship. In this work, we propose a two-stage hybrid neural network to learn and forecast the daily movements of the USD/TWD exchange rate. Unlike other studies, which focus on news or social media alone, we integrate them and add forum discussions as input data. Different combinations of the three data sources yield different views and may affect the forecasting accuracy in different ways. With this method, preliminary experimental results outperformed a random walk model.
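
Purely as a hypothetical illustration of the pipeline's final step: one numerical score per source is combined into a probability of an upward move. The thesis learns this mapping with a neural network; the function, weights and values below are invented.

#include <cmath>
#include <cstdio>

// Toy final stage: combine one sentiment score per source (news, forum,
// social media) into a probability that the exchange rate moves up.
double upProbability(double news, double forum, double social) {
    const double w[3] = {0.5, 0.2, 0.3};   // hypothetical learned weights
    const double bias = -0.1;
    double z = w[0] * news + w[1] * forum + w[2] * social + bias;
    return 1.0 / (1.0 + std::exp(-z));     // logistic output
}

int main() {
    // Scores in [-1, 1] from upstream text mining (illustrative values).
    printf("P(up) = %.3f\n", upProbability(0.8, -0.2, 0.4));
    return 0;
}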
|
6 |
Parallelization of ultrasonic field simulations for non-destructive testing
Lambert, Jason, 03 July 2015
The non-destructive testing field increasingly uses simulation. It is used at every step of the control process of an industrial part, from speeding up control development to helping experts understand results. During this thesis, a simulation tool dedicated to the fast computation of the ultrasonic field radiated by a phased array probe into an isotropic specimen was developed. Its performance enables interactive usage. To benefit from commonly available parallel architectures, a regular model (aimed at removing divergent branching) derived from the generic CIVA model was developed. First, a reference implementation was produced to validate this model against CIVA results and to analyze its performance behaviour before optimization. The resulting code was then optimized for three kinds of parallel architectures commonly available in workstations: general purpose processors (GPP), manycore coprocessors (Intel MIC) and graphics processing units (nVidia GPU). On the GPP and the MIC, the algorithm was reorganized and implemented to benefit from both parallelism levels, multithreading and vector instructions. On the GPU, the multiple steps of field computation were divided into a series of successive CUDA kernels. Moreover, libraries dedicated to each architecture were used to speed up the Fast Fourier Transforms: Intel MKL on GPP and MIC, and nVidia cuFFT on GPU. The performance and hardware suitability of the produced algorithms were thoroughly studied for each architecture. On multiple realistic control configurations, interactive performance was reached. Perspectives to address more complex configurations are drawn. Finally, the integration and industrialization of this code in the commercial NDT platform CIVA is discussed.
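
As an illustration of the batched-FFT pattern mentioned for the GPU side, here is a minimal cuFFT sketch. Error handling is omitted, and the function name and sizes are assumptions, not the thesis code; link with -lcufft.

#include <cufft.h>

// Batched 1-D forward FFT of many per-element time signals at once.
void batchedFft(cufftComplex* d_signals, int signalLen, int nSignals) {
    cufftHandle plan;
    // A single plan covers the whole batch; cuFFT schedules its own kernels.
    cufftPlan1d(&plan, signalLen, CUFFT_C2C, nSignals);
    cufftExecC2C(plan, d_signals, d_signals, CUFFT_FORWARD);  // in place
    cufftDestroy(plan);
}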
|
7 |
GPU-enhanced power flow analysis
Marin, Manuel, 11 December 2015
This thesis addresses the utilization of Graphics Processing Units (GPUs) for improving the Power Flow (PF) analysis of modern power systems. Currently, GPUs are challenged by applications exhibiting an irregular computational pattern, as is the case of most known methods for PF analysis. At the same time, PF analysis needs to be improved in order to cope with new requirements of efficiency and accuracy coming from the Smart Grid concept. The relevance of GPU-enhanced PF analysis is twofold. On one hand, it expands the application domain of GPUs to a new class of problems. On the other hand, it consistently increases the computational capacity available for power system operation and design. The present work attempts to achieve that in two complementary ways: (i) by developing novel GPU programming strategies for available PF algorithms, and (ii) by proposing novel PF analysis methods that can exploit the numerous features present in GPU architectures. Specific contributions on GPU computing include: (i) a comparison of two programming paradigms, namely regularity and load-balancing, for implementing the so-called treefix operations; (ii) a study of the impact of the representation format on performance and accuracy for fuzzy interval algebraic operations; and (iii) the utilization of architecture-specific design as a novel strategy to improve the performance scalability of applications. Contributions on PF analysis include: (i) the design and evaluation of a novel method for uncertainty assessment, based on the fuzzy interval approach; and (ii) the development of an intrinsically parallel method for PF analysis, which is not affected by Amdahl's law.
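
As a hedged illustration of the interval building block behind the fuzzy interval approach: a fuzzy number can be processed as a stack of alpha-cut intervals, each following the rules below. This is textbook interval arithmetic, not the thesis implementation.

#include <algorithm>
#include <cstdio>

// Closed interval [lo, hi] and the two basic operations of classical
// interval arithmetic: sums add endpoints; products take the extreme
// cross products (signs may flip the ordering).
struct Interval { double lo, hi; };

Interval add(Interval a, Interval b) { return {a.lo + b.lo, a.hi + b.hi}; }

Interval mul(Interval a, Interval b) {
    double p[4] = {a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi};
    return {*std::min_element(p, p + 4), *std::max_element(p, p + 4)};
}

int main() {
    Interval load{0.9, 1.1}, price{2.0, 3.0};  // uncertain inputs
    Interval cost = mul(load, price);
    printf("[%g, %g]\n", cost.lo, cost.hi);    // prints: [1.8, 3.3]
    return 0;
}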
|