1 |
Dynamic warp formation : exploiting thread scheduling for efficient MIMD control flow on SIMD graphics hardware. Fung, Wilson Wai Lun, 11 1900
Recent advances in graphics processing units (GPUs) have resulted in massively parallel hardware that is easily programmable and widely available in commodity desktop computer systems. GPUs typically use single-instruction, multiple-data (SIMD) pipelines to achieve high performance with minimal overhead for control hardware. Scalar threads running the same computing kernel are grouped together into SIMD batches, sometimes referred to as warps. While SIMD is ideally suited for simple programs, recent GPUs include control flow instructions in the GPU instruction set architecture and programs using these instructions may experience reduced performance due to the way branch execution is supported by hardware. One solution is to add a stack to allow different SIMD processing elements to execute distinct program paths after a branch instruction. The occurrence of diverging branch outcomes for different processing elements significantly degrades performance using this approach. In this thesis, we propose dynamic warp formation and scheduling, a mechanism for more efficient SIMD branch execution on GPUs. It dynamically regroups threads into new warps on the fly following the occurrence of diverging branch outcomes. We show that a realistic hardware implementation of this mechanism improves performance by an average of 47% for an estimated area increase of 8%. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
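As a rough illustration of the regrouping step described in this abstract, the Python sketch below bins threads by the program counter they branch to and packs full SIMD warps from each bin (warp width, PC values, and the data structures are invented for illustration; the thesis proposes a hardware mechanism, which this software model only mimics).

from collections import defaultdict

WARP_WIDTH = 4  # SIMD lanes per warp (illustrative; real GPUs use e.g. 32)

def form_warps(threads):
    # `threads` is a list of (thread_id, next_pc) pairs after a branch.
    # Threads are binned by the PC they are about to execute, then packed
    # into new warps so each warp issues with as many full lanes as possible.
    by_pc = defaultdict(list)
    for tid, pc in threads:
        by_pc[pc].append(tid)
    warps = []
    for pc, tids in sorted(by_pc.items()):
        for i in range(0, len(tids), WARP_WIDTH):
            warps.append((pc, tids[i:i + WARP_WIDTH]))
    return warps

# Two 4-wide warps diverge at a branch: lane masking would leave four
# half-empty warps, while regrouping yields one full warp per path.
threads = [(0, 100), (1, 200), (2, 100), (3, 200),
           (4, 100), (5, 200), (6, 100), (7, 200)]
for pc, lanes in form_warps(threads):
    print(f"pc={pc} lanes={lanes}")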
|
2 |
Runtime Adaptation for Autonomic Heterogeneous Computing. Scogland, Thomas R., 12 December 2014
Heterogeneity is increasing across all levels of computing, with the spread of accelerators such as GPUs, FPGAs, and other coprocessors into everything from cell phones to supercomputers. More quietly, it is increasing with the rise of NUMA systems, hierarchical caching, OS noise, and a myriad of other factors. As heterogeneity becomes a fact of life, efficiently managing heterogeneous compute resources is becoming a critical, and ever more complex, task. The focus of this dissertation is to lay the foundation for an autonomic system for heterogeneous computing, employing runtime adaptation to improve performance portability and performance consistency while maintaining or increasing programmability. We investigate heterogeneity arising from a myriad of factors, grouped into the dimensions of locality and capability. This work has resulted in runtime schedulers capable of automatically detecting and mitigating heterogeneity in physically homogeneous systems through MPI, in adaptive coscheduling for physically heterogeneous accelerator-based systems, and in a synthesis of the two that addresses multiple levels of heterogeneity as a coherent whole. We also discuss our current work towards the next generation of fine-grained scheduling and synchronization across heterogeneous platforms: the design of a highly scalable and portable concurrent queue for many-core systems. Each component addresses an aspect of the urgent need for automated management of the extreme and ever-expanding complexity introduced by heterogeneity. / Ph. D.
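A toy sketch, in Python, of the throughput-proportional work splitting at the heart of adaptive coscheduling of this kind (my illustration under assumed interfaces, not the dissertation's scheduler): each device's share of the next pass is set by its measured rate on the last one, so slower devices shed work automatically.

import time

def adaptive_split(devices, work_items, passes=5):
    # `devices` maps a name to a callable that processes a list of items.
    # Each pass divides `work_items` among devices in proportion to the
    # throughput (items/second) measured on the previous pass.
    share = {name: 1.0 / len(devices) for name in devices}
    for _ in range(passes):
        start, rates = 0, {}
        for name, run in devices.items():
            n = max(1, int(share[name] * len(work_items)))  # rounding leftovers ignored
            chunk = work_items[start:start + n]
            start += n
            t0 = time.perf_counter()
            run(chunk)
            rates[name] = len(chunk) / (time.perf_counter() - t0 + 1e-9)
        total = sum(rates.values())
        share = {name: r / total for name, r in rates.items()}
    return share

# Two unequal "devices": the shares converge toward their speed ratio.
devices = {"fast": lambda xs: [x * x for x in xs],
           "slow": lambda xs: [x * x for x in xs for _ in range(20)]}
print(adaptive_split(devices, list(range(20000))))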
|
3 |
Enabling rapid iterative model design within the laboratory environment. Clayton, Thomas F., January 2009
This thesis presents a proof of concept study for the better integration of the electrophysiological and modelling aspects of neuroscience. Members of these two sub-disciplines collaborate regularly, but due to differing resource requirements, and largely incompatible spheres of knowledge, cooperation is often impeded by miscommunication and delays. To reduce the model design time, and provide a platform for more efficient experimental analysis, a rapid iterative model design method is proposed. The main achievement of this work is the development of a rapid model evaluation method based on parameter estimation, utilising a combination of evolutionary algorithms (EAs) and graphics processing unit (GPU) hardware acceleration. This method is the primary force behind the better integration of modelling and laboratory-based electrophysiology, as it provides a generic model evaluation method that does not require prior knowledge of model structure, or expertise in modelling, mathematics, or computer science. If combined with a suitably intuitive and user-targeted graphical user interface, the ideas presented in this thesis could be developed into a suite of tools that would enable new forms of experimentation to be performed. The latter part of this thesis investigates the use of excitability-based models as the basis of an iterative design method. They were found to be computationally and structurally simple, easily extensible, and able to reproduce a wide range of neural behaviours whilst still faithfully representing underlying cellular mechanisms. A case study was performed to assess the iterative design process, through the implementation of an excitability-based model. The model was extended iteratively, using the rapid model evaluation method, to represent a vasopressin-releasing neuron. Not only was the model implemented successfully, but it was able to suggest the existence of other more subtle cell mechanisms, in addition to highlighting potential failings in previous implementations of this class of neuron.
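A serial NumPy sketch of the EA-driven parameter-estimation loop described above (illustrative only: the model is a stock FitzHugh-Nagumo excitability model, the target trace is synthetic, and the thesis runs such evaluations GPU-accelerated rather than in a Python loop):

import numpy as np

rng = np.random.default_rng(0)

def simulate(a, b, tau, steps=500, dt=0.1, I=0.5):
    # FitzHugh-Nagumo: a minimal excitability-based neuron model.
    v, w, trace = -1.0, 1.0, []
    for _ in range(steps):
        v += dt * (v - v ** 3 / 3 - w + I)
        w += dt * (v + a - b * w) / tau
        trace.append(v)
    return np.array(trace)

target = simulate(0.7, 0.8, 12.5)   # synthetic "recording" for the demo

def fitness(p):
    # Negative RMSE between the candidate's trace and the target trace.
    return -np.sqrt(np.mean((simulate(*p) - target) ** 2))

# (mu + lambda)-style loop: keep the 10 best, refill with mutated copies.
pop = rng.uniform([0.1, 0.1, 5.0], [1.5, 1.5, 20.0], size=(30, 3))
for gen in range(40):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-10:]]
    children = elite[rng.integers(0, 10, 20)] + rng.normal(0, 0.05, (20, 3))
    pop = np.vstack([elite, children])
print("best parameters:", pop[np.argmax([fitness(p) for p in pop])])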
|
4 |
Genetic programming and cellular automata for fast flood modelling on multi-core CPU and many-core GPU computers. Gibson, Michael John, January 2015
Many complex systems in nature are governed by simple local interactions, although a number are also described by global interactions. For example, within the field of hydraulics the Navier-Stokes equations describe free-surface water flow by means of the global preservation of water volume, momentum, and energy. However, solving such partial differential equations (PDEs) is computationally expensive when applied to large 2D flow problems. An alternative that reduces the computational complexity is to approximate the PDEs using local rules, such as finite difference methods or Cellular Automata (CA). The high-speed processing of such simulations is important to modern scientific investigation, especially within urban flood modelling, as urban expansion continues to increase the number of impervious areas that need to be modelled. Large numbers of model runs, or simulations at large spatial or temporal resolution, are required in order to investigate, for example, climate change, early warning systems, and sewer design optimisation. The recent introduction of the Graphics Processor Unit (GPU) as a general purpose computing device (General Purpose Graphical Processor Unit, GPGPU) allows this hardware to be used for the accelerated processing of such locally driven simulations. A novel CA transformation for use with GPUs is proposed here to make maximum use of the GPU hardware.
CA models are defined by their local state transition rules, which are applied in every cell in parallel, and provide an excellent platform for a comparative study of possible alternative state transition rules. Writing local state transition rules for CA systems is a difficult task for humans due to the number and complexity of possible interactions, and is known as the ‘inverse problem’ for CA. Therefore, the use of Genetic Programming (GP) algorithms for the automatic development of state transition rules from example data is also investigated in this thesis. GP is investigated as it is capable of searching the intractably large spaces of possible state transition rules and producing near optimal solutions. However, such population-based optimisation algorithms are limited by the cost of many repeated evaluations of the fitness function, which in this case requires the comparison of a CA simulation to given target data. Therefore, the use of GPGPU hardware for the accelerated learning of local rules is also developed. Speed-up factors of up to 50 times over serial Central Processing Unit (CPU) processing are achieved on simple CA, and of 5-10 times over the fully parallel CPU for the learning of urban flood modelling rules. Furthermore, it is shown that GP can generate rules which perform competitively when compared with human-formulated rules. This is achieved with generalisation to unseen terrains using similar input conditions and different spatial/temporal resolutions in this important application domain.
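A minimal NumPy sketch of a hand-written local state transition rule of the kind the GP system searches for (an illustrative rule with invented constants and periodic boundaries, not one of the thesis's evolved or human-formulated rules): each cell pushes a fraction of its water toward any lower von Neumann neighbour.

import numpy as np

def ca_flood_step(terrain, depth, frac=0.25):
    # One CA update: water flows from each cell to lower von Neumann
    # neighbours in proportion to the free-surface head difference.
    # Periodic boundaries (np.roll) are used purely for brevity.
    head = terrain + depth
    out = depth.copy()
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = np.roll(head, (dy, dx), axis=(0, 1))        # neighbour's head
        dh = np.clip(head - nb, 0.0, None)               # downhill only
        flow = np.minimum(frac * dh, depth / 4)          # limited by storage
        out -= flow                                      # water leaving
        out += np.roll(flow, (-dy, -dx), axis=(0, 1))    # water arriving
    return out

# A unit depth of water dropped on flat terrain spreads symmetrically.
terrain = np.zeros((5, 5))
depth = np.zeros((5, 5))
depth[2, 2] = 1.0
for _ in range(3):
    depth = ca_flood_step(terrain, depth)
print(depth.round(3))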
|
5 |
Ανάπτυξη διαδικτυακής εφαρμογής για την εξομοίωση της λειτουργίας ενός επεξεργαστή με διευρυμένο ρεπερτόριο εντολών / Development of a web application for simulating the operation of a processor with an extended instruction set. Κάτσενος, Χρήστος, 26 July 2012
The purpose of this study is to simulate the operation of a processor with an expanded instruction set over the Internet. More specifically, an online tool was developed that accepts a sequence of instructions, checks them, assembles them, and stores the resulting code in the application's memory.
Once all of this is complete and the program has been checked and stored in memory, the graphical part of the application simulates the operation of the processor, displaying the values the registers take at each moment as well as the sequence of data transferred to and from them.
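For illustration, a toy fetch-decode-execute loop in Python with per-step register display, the basic shape of such a simulator (the three-instruction repertoire, register count, and encoding are invented; the actual tool simulates a far richer instruction set):

def run(program, nregs=4):
    # Execute a list of (op, *args) instructions on a small register bank,
    # printing the register state after every step.
    regs, pc = [0] * nregs, 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":                      # rX <- immediate
            regs[args[0]] = args[1]
        elif op == "ADD":                     # rX <- rY + rZ
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "JNZ":                     # jump to address if rX != 0
            pc = args[1] - 1 if regs[args[0]] else pc
        pc += 1
        print(f"pc={pc:2d} regs={regs}")
    return regs

# Count r0 down from 3 to 0 by looping over the ADD instruction.
run([("LOAD", 0, 3), ("LOAD", 1, -1), ("ADD", 0, 0, 1), ("JNZ", 0, 2)])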
|
6 |
A General-Purpose GPU Reservoir Computer. Keith, Tūreiti, January 2013
The reservoir computer comprises a reservoir of possibly non-linear, possibly chaotic dynamics. By perturbing and taking outputs from this reservoir, its dynamics may be harnessed to compute complex problems at “the edge of chaos”. One of the first forms of reservoir computer, the Echo State Network (ESN), is a form of artificial neural network that builds its reservoir from a large and sparsely connected recurrent neural network (RNN). The ESN was initially introduced as an innovative way to train RNNs, a task which, up until that point, had been notoriously difficult. The innovation of the ESN is that, rather than train the RNN weights, only the output is trained. If this output is assumed to be linear, then linear regression may be used.
This work presents an effort to implement the Echo State Network, and an offline linear regression training method based on Tikhonov regularisation. This implementation targeted the general-purpose graphics processing unit (GPU or GPGPU). The behaviour of the implementation was examined by comparing it with a central processing unit (CPU) implementation, and by assessing its performance against several studied learning problems. These assessments were performed using all 4 cores of the Intel i7-980 CPU and an Nvidia GTX480. When compared with a CPU implementation, the GPU ESN implementation demonstrated a speed-up starting from a reservoir size of between 512 and 1,024. A maximum speed-up of approximately 6 was observed at the largest reservoir size tested (2,048). The Tikhonov regularisation (TR) implementation was also compared with a CPU implementation. Unlike the ESN execution, the GPU TR implementation was generally slower than the CPU implementation. Speed-ups were observed at the largest reservoir and state history sizes, the largest of which was 2.6813. The learning behaviour of the GPU ESN was tested on three problems: a sinusoid, a Mackey-Glass time-series, and a multiple superimposed oscillator (MSO). The normalised root-mean squared errors of the predictors were compared. The best observed sinusoid predictor outperformed the best MSO predictor by 4 orders of magnitude. In turn, the best observed MSO predictor outperformed the best Mackey-Glass predictor by 2 orders of magnitude.
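A serial NumPy sketch of the two pieces just described, an ESN state update and an offline Tikhonov-regularised readout (sizes, spectral radius, and the sine-prediction task are illustrative choices, and the thesis implements these on the GPU rather than with dense NumPy operations):

import numpy as np

rng = np.random.default_rng(1)
N, T, ridge = 200, 1000, 1e-6

# Sparse random reservoir, rescaled so the spectral radius is below 1
# (the usual condition aimed at securing the echo state property).
W = rng.normal(size=(N, N)) * (rng.random((N, N)) < 0.05)
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()
W_in = rng.uniform(-0.5, 0.5, size=N)

u = np.sin(0.2 * np.arange(T + 1))      # task: predict u[t+1] from u[t]
x, X = np.zeros(N), np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])    # reservoir state update
    X[t] = x

# Offline Tikhonov-regularised readout: solve (X^T X + ridge*I) w = X^T y.
y = u[1:T + 1]
w = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)
print("train NRMSE:", np.sqrt(np.mean((X @ w - y) ** 2)) / np.std(y))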
|
7 |
High performance bioinformatics and computational biology on general-purpose graphics processing units. Ling, Cheng, January 2012
Bioinformatics and Computational Biology (BCB) is a relatively new multidisciplinary field which brings together many aspects of the fields of biology, computer science, statistics, and engineering. Bioinformatics extracts useful information from biological data and makes it more intuitive and understandable by applying principles of information sciences, while computational biology harnesses computational approaches and technologies to answer biological questions conveniently. Recent years have seen an explosion in the size of biological data at a rate which outpaces the growth in the computational power of mainstream computer technologies, namely general purpose processors (GPPs). The aim of this thesis is to explore the use of off-the-shelf Graphics Processing Unit (GPU) technology in the high performance and efficient implementation of BCB applications, in order to meet the demands of biological data increases at affordable cost.
The thesis presents detailed designs and implementations of GPU solutions for a number of BCB algorithms in two widely used BCB applications, namely biological sequence alignment and phylogenetic analysis. Biological sequence alignment can be used to determine the potential information about a newly discovered biological sequence from other well-known sequences through similarity comparison. Phylogenetic analysis, on the other hand, is concerned with the investigation of the evolution of and relationships among organisms, and has many uses in the fields of system biology and comparative genomics. In molecular-based phylogenetic analysis, the relationship between species is estimated by inferring the common history of their genes, and phylogenetic trees are then constructed to illustrate evolutionary relationships among genes and organisms. However, both biological sequence alignment and phylogenetic analysis are computationally expensive applications, as their computing and memory requirements grow polynomially or even worse with the size of sequence databases.
The thesis firstly presents a multi-threaded parallel design of the Smith-Waterman (SW) algorithm alongside an implementation on NVIDIA GPUs. A novel technique is put forward to solve the restriction on the length of the query sequence in previous GPU-based implementations of the SW algorithm. Based on this implementation, the difference between the two main task parallelization approaches (inter-task and intra-task parallelization) is presented. The resulting GPU implementation matches the speed of existing GPU implementations while providing more flexibility, i.e. flexible lengths of sequences in real world applications. It also outperforms an equivalent GPP-based implementation by 15x-20x. After this, the thesis presents the first reported multi-threaded design and GPU implementation of the Gapped BLAST with Two-Hit method algorithm, which is widely used for aligning biological sequences heuristically. This achieved up to 3x speed-up compared to the most optimised GPP implementations. The thesis then presents a multi-threaded design and GPU implementation of a Neighbor-Joining (NJ)-based method for phylogenetic tree construction and multiple sequence alignment (MSA). This achieves 8x-20x speed-up compared to an equivalent GPP implementation based on the widely used ClustalW software. The NJ method, however, only gives one possible tree, which strongly depends on the evolutionary model used.
A more advanced method uses maximum likelihood (ML) for scoring phylogenies with Markov Chain Monte Carlo (MCMC)-based Bayesian inference. The latter was the subject of another multi-threaded design and GPU implementation presented in this thesis, which achieved 4x-8x speed-up compared to an equivalent GPP implementation based on the widely used MrBayes software. Finally, the thesis presents a general evaluation of the designs and implementations achieved in this work as a step towards the evaluation of GPU technology in BCB computing, in the context of other computer technologies including GPPs and Field Programmable Gate Array (FPGA) technology.
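For reference, a plain-Python rendering of the Smith-Waterman recurrence that the GPU design above parallelises (a sketch with a linear gap penalty and arbitrary scores; GPU implementations typically evaluate the anti-diagonals of this table in parallel, since cells on the same anti-diagonal are independent):

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    # H[i][j] is the best score of a local alignment ending at a[i-1], b[j-1];
    # the zero option lets poor prefixes be discarded (local alignment).
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # match/mismatch
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            best = max(best, H[i][j])
    return best

print(smith_waterman("GGTTGACTA", "TGTTACGG"))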
|
8 |
Implementace neúplného inverzního rozkladu na grafických kartách / Implementing incomplete inverse decomposition on graphical processing units. Dědeček, Jan, January 2013
The goal of this thesis was to evaluate the possibility of solving systems of linear algebraic equations with the help of graphics processing units (GPUs). While such solvers for dense systems seem to be more or less a part of standard production libraries, the thesis concentrates on the low-level parallelization of equations with a sparse system, which still presents a challenge. In particular, the thesis considers a specific algorithm for the approximate inverse decomposition of symmetric positive definite systems, combined with the conjugate gradient method. An important part of this work is an innovative parallel implementation. The presented experimental results for systems of various sizes and sparsity structures indicate that the approach is promising and should be developed further. In summary, efficient preconditioning of sparse systems by approximate inverses on GPUs seems worthy of consideration.
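A NumPy sketch of why this combination suits GPUs (illustrative: here the approximate inverse is just a diagonal one, not the thesis's incomplete inverse decomposition): inside conjugate gradients, applying an approximate-inverse preconditioner is a matrix-vector product, which parallelises far more readily than the triangular solves required by conventional incomplete factorisations.

import numpy as np

def pcg(A, b, apply_M, tol=1e-8, maxiter=200):
    # Preconditioned conjugate gradients for SPD A. `apply_M` applies an
    # approximation of A^{-1}; with an approximate inverse this is a plain
    # (sparse) matrix-vector product, which maps well onto GPU hardware.
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_M(r)
    p = z.copy()
    for k in range(maxiter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x, k + 1
        z_new = apply_M(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, maxiter

# Toy SPD system, preconditioned by the inverse of its diagonal.
n = 100
A = np.diag(np.arange(1.0, n + 1)) + 0.1 * np.ones((n, n))
b = np.ones(n)
x, iters = pcg(A, b, apply_M=lambda r: r / np.diag(A))
print("iterations:", iters, "residual:", np.linalg.norm(b - A @ x))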
|