201

Stream Computing on FPGAs

Plavec, Franjo 01 September 2010 (has links)
Field Programmable Gate Arrays (FPGAs) are programmable logic devices used for the implementation of a wide range of digital systems. In recent years, there has been increasing interest in design methodologies that allow high-level design descriptions to be automatically implemented in FPGAs. This thesis describes the design and implementation of a novel compilation flow that implements circuits in FPGAs from a streaming programming language. The streaming language supported is called FPGA Brook, and is based on the existing Brook and GPU Brook languages, which target streaming multiprocessors and graphics processing units (GPUs), respectively. A streaming language is suitable for targeting FPGAs because it allows system designers to express applications in a way that exposes parallelism, which can then be exploited through parallel hardware implementation. FPGA Brook supports replication, which allows the system designer to trade off area for performance by specifying the parts of an application that should be implemented as multiple hardware units operating in parallel, to achieve the desired application throughput. Hardware units are interconnected through FIFO buffers, which effectively utilize the small memory modules available in FPGAs. The FPGA Brook design flow uses a source-to-source compiler and combines it with a commercial behavioural synthesis tool to generate hardware. The source-to-source compiler was developed as part of this thesis and includes novel algorithms for the implementation of complex reductions in FPGAs. The design flow is fully automated and presents a user interface similar to traditional software compilers. A suite of benchmark applications was developed in FPGA Brook and implemented using our design flow. Experimental results show that applications implemented using our flow achieve much higher throughput than the Nios II soft processor implemented in the same FPGA device. Comparison with the commercial C2H compiler from Altera shows that while simple applications can be implemented effectively using the C2H compiler, complex applications achieve significantly better throughput when implemented by our system. The performance of many applications implemented using our design flow would scale further if a larger FPGA device were used. The thesis demonstrates that using an automated design flow to implement streaming applications in FPGAs is a promising methodology.
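As a rough software analogy of the streaming model described above (not FPGA Brook syntax, which the abstract does not show), the sketch below connects kernels through bounded FIFO buffers and replicates one compute kernel to trade resources for throughput; the `Fifo` class and the kernel bodies are invented for illustration.

```cpp
// Software analogy of a streaming pipeline: kernels connected by FIFO buffers,
// with one compute kernel replicated ("replication") for higher throughput.
#include <condition_variable>
#include <cstddef>
#include <iostream>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>
#include <vector>

// Bounded FIFO standing in for the small on-chip memory buffers between kernels.
template <typename T>
class Fifo {
public:
    explicit Fifo(std::size_t cap) : cap_(cap) {}
    void push(std::optional<T> v) {
        std::unique_lock<std::mutex> lk(m_);
        not_full_.wait(lk, [&] { return q_.size() < cap_; });
        q_.push(std::move(v));
        not_empty_.notify_one();
    }
    std::optional<T> pop() {
        std::unique_lock<std::mutex> lk(m_);
        not_empty_.wait(lk, [&] { return !q_.empty(); });
        std::optional<T> v = std::move(q_.front());
        q_.pop();
        not_full_.notify_one();
        return v;
    }
private:
    std::queue<std::optional<T>> q_;
    std::size_t cap_;
    std::mutex m_;
    std::condition_variable not_empty_, not_full_;
};

int main() {
    Fifo<int> in(32), out(32);
    const int replicas = 2;  // two copies of the same kernel share the input stream

    // Replicated compute kernel: each copy consumes elements from the input FIFO,
    // applies the per-element kernel body, and forwards end-of-stream when done.
    std::vector<std::thread> kernels;
    for (int k = 0; k < replicas; ++k)
        kernels.emplace_back([&] {
            while (auto v = in.pop()) out.push(*v * 3);  // the per-element kernel body
            out.push(std::nullopt);
        });

    // Source: produce a stream of elements, then one end-of-stream marker per replica.
    for (int i = 0; i < 16; ++i) in.push(i);
    for (int k = 0; k < replicas; ++k) in.push(std::nullopt);

    // Sink: drain results (output order may differ from input order when replicas > 1).
    int done = 0, count = 0;
    while (done < replicas) {
        std::optional<int> v = out.pop();
        if (!v) { ++done; continue; }
        ++count;
    }
    std::cout << "processed " << count << " stream elements\n";
    for (auto& t : kernels) t.join();
}
```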
202

Modèles de programmation des applications de traitement du signal et de l'image sur cluster parallèle et hétérogène / Programming models for signal and image processing on parallel and heterogeneous architectures

Mansouri, Farouk 14 October 2015 (has links)
Over the last decade, computing systems have evolved towards parallel and heterogeneous architectures. Composed of several nodes connected via a network, each containing heterogeneous processing units, these clusters deliver high performance. To program these architectures, the user must rely on programming models such as MPI, OpenMP or CUDA. However, it remains difficult to reconcile programmer productivity, which calls for abstracting away architectural specifics, with performance. In this thesis, we exploit the idea that a programming model specific to a particular application domain can reconcile these two conflicting goals. By characterizing a family of applications, it is possible to identify high-level abstractions that model them effectively. We propose two models specific to the implementation of signal and image processing applications on heterogeneous clusters. The first model is static; we extend it with a task-migration feature. The second is dynamic, built on the StarPU runtime system. Both models offer a high level of abstraction by modelling signal and image processing applications as data-flow graphs, and both efficiently exploit task, data and graph parallelism. We validate both models through several implementations and comparisons, including two real-world image processing applications on a CPU-GPU cluster.
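As an illustration of the data-flow-graph abstraction mentioned in the abstract, the sketch below expresses a toy image pipeline as tasks that start as soon as their inputs are ready; it is a generic C++ sketch, not the thesis' programming model nor the StarPU API, and the filter functions are invented.

```cpp
// Illustrative data-flow graph: two independent image-processing tasks run in
// parallel, and a final task starts once both of its inputs are ready.
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

using Image = std::vector<float>;

Image blur(Image in)     { for (float& p : in) p *= 0.9f; return in; }  // stand-in filter
Image gradient(Image in) { for (float& p : in) p += 1.0f; return in; }  // stand-in filter

// Final graph node: depends on the results of both branches.
float combine(const Image& a, const Image& b) {
    return std::accumulate(a.begin(), a.end(), 0.0f) +
           std::accumulate(b.begin(), b.end(), 0.0f);
}

int main() {
    Image input(1024, 1.0f);

    // Task parallelism: the two branches of the graph execute concurrently.
    // Each branch could itself be split over pixel blocks (data parallelism).
    auto branch1 = std::async(std::launch::async, blur, input);
    auto branch2 = std::async(std::launch::async, gradient, input);

    // Graph parallelism is expressed by the dependencies: combine waits for both.
    std::cout << "pipeline output: " << combine(branch1.get(), branch2.get()) << "\n";
}
```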
203

Výtvarný a hudební minimalismus / Minimal art and minimal music

Šedinová, Pavla January 2011 (has links)
This thesis focuses on the theoretical description of minimalism in the fine arts and music. It follows the progression of minimisation in art in the first half of the twentieth century, which culminates in minimal art and minimal music, and compares its principles with multiplication in the work of Andy Warhol and the Velvet Underground. At the same time, the paper discusses the difference between American minimalism and its European and Czech parallels. The aim of this thesis is to describe the main concepts of minimalism that could be relevant for teaching purposes. The pedagogical part of the thesis focuses on the manner in which the obtained information about the fine arts and music could be utilised in teaching, particularly in Elementary Art Schools. The practical part demonstrates my own approach to minimalist principles.
204

Parallelisierung von Algorithmen zur Nutzung auf Architekturen mit Teilwortparallelität / Parallelization of algorithms for architectures with subword parallelism

Schaffer, Rainer 09 March 2010 (has links)
Technological progress permits the implementation of increasingly complex processor architectures on a single chip. One trend of recent years is the integration of more and more processing units on one chip. This creates new challenges for mapping algorithms onto such architectures, because all processing units should be used efficiently when the algorithm is executed. The focus of this dissertation is the exploitation of the parallelism of processor arrays with subword parallelism. Such architectures allow parallel execution at several levels, so a mapping strategy was developed with a particular emphasis on subword parallelism. The mapping strategy is based on the methods of processor array design. Processor arrays are regularly arranged processor elements that communicate only with their neighbouring elements; data input and output is handled by the processor elements at the border of the array. Each processor element can contain several functional units, which execute the computational operations of the algorithm. Subword parallelism denotes the ability to split the data path of a functional unit into several narrower data paths for the parallel processing of data of small word width. The developed mapping strategy consists of two steps, the "preprocessing" and the "multi-level modified copartitioning" (MMC). The preprocessing transforms the algorithm in such a way that the transformed algorithm can be mapped quickly and efficiently onto the target architecture; for this purpose, an optimization problem was formulated that determines the transformation parameters step by step. The multi-level modified copartitioning is then used to adapt the algorithm to the target architecture level by level. In addition, the mapping method allows the local registers in the processor elements to be exploited and the algorithm to be adapted to the memory architecture to which the processor array is connected. The first MMC level transforms an algorithm with single-data operations into an algorithm with subword-parallel operations. The second copartitioning level adapts the algorithm to the local registers and to the processor array. Further copartitioning levels can be used to adapt the algorithm to the memory architecture.
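To make the notion of subword parallelism concrete, the following minimal SWAR (SIMD-within-a-register) sketch adds four 8-bit values packed into one 32-bit word using ordinary integer operations while keeping carries from crossing lane boundaries; it illustrates the general technique only, not the mapping strategy developed in the thesis.

```cpp
// SWAR sketch of subword parallelism: four 8-bit lanes packed into one 32-bit
// word are added lane by lane with ordinary 32-bit operations, while carries
// are prevented from crossing lane boundaries.
#include <cstdint>
#include <cstdio>

// Per-lane (modulo-256) addition of four packed 8-bit lanes.
std::uint32_t add_packed8(std::uint32_t a, std::uint32_t b) {
    const std::uint32_t H = 0x80808080u;          // the top bit of every 8-bit lane
    return ((a & ~H) + (b & ~H)) ^ ((a ^ b) & H); // add low 7 bits, patch the top bits
}

int main() {
    std::uint32_t a = 0x0A141EFAu;  // lanes (high to low byte): 10, 20, 30, 250
    std::uint32_t b = 0x0102030Au;  // lanes:                      1,  2,  3,  10
    std::uint32_t s = add_packed8(a, b);
    // Expected 0x0B162104: lanes 11, 22, 33, 4 (250 + 10 wraps around modulo 256).
    std::printf("packed sum = 0x%08X\n", static_cast<unsigned>(s));
}
```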
205

Parallel Query Systems : Demand-Driven Incremental Compilers / En arkitektur för parallella och inkrementella kompilatorer

Nolander, Christofer January 2023 (has links)
Query systems were recently introduced as an architecture for constructing compilers, and have been shown to enable fast and efficient incremental compilation, where results from previous builds are reused to accelerate future builds. With this architecture, a compiler is composed of several queries, each of which extracts a small piece of information about the source program. For example, one query might determine the type of a variable, and another the list of functions defined in some file. The dependencies of a query, which include other queries or files on disk, are automatically recorded at runtime. With these dependencies, query systems can detect changes in their inputs and incorporate them into the final output, while reusing old results from queries that have not changed. This reduces the amount of work needed to recompile code, which saves both time and energy. We present a new parallel execution model for query systems using work-stealing, which dynamically balances the workload across multiple threads. This is facilitated by various augmentations to existing algorithms to allow concurrent operations. Furthermore, we introduce a novel data structure that accelerates incremental compilation for common use cases. We evaluated the impact of these augmentations by implementing a compiler frontend capable of parsing and type-checking the Go programming language. We demonstrate a 10x reduction in compile times using the parallel execution model. Finally, under certain common conditions, we show a 5x reduction in incremental compile times compared to the state of the art.
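A minimal sketch of the memoization idea behind such query systems is shown below: a query caches its result together with the revision of the inputs it read, and is recomputed only when one of those inputs has changed. The `Database` and `line_count` query are invented for illustration; the thesis' system additionally parallelizes query evaluation with work-stealing, which this sketch does not attempt.

```cpp
// Minimal memoization sketch: a query caches its result plus the revision of the
// inputs it read, and is recomputed only when one of those inputs has changed.
#include <cstddef>
#include <iostream>
#include <map>
#include <optional>
#include <set>
#include <string>

struct Database {
    std::map<std::string, std::string> files;   // "source files" (the compiler's inputs)
    std::map<std::string, long> changed_at;     // revision at which each input last changed
    long revision = 0;

    void set_file(const std::string& name, std::string text) {
        files[name] = std::move(text);
        changed_at[name] = ++revision;           // bump the global revision counter
    }
};

struct CachedQuery {
    std::optional<std::size_t> value;            // memoized result
    std::set<std::string> deps;                  // inputs read during the last run
    long verified_at = -1;                       // revision at which the cache was valid
};

// "line_count" query: recomputed only if a file it previously read has changed.
std::size_t line_count(Database& db, CachedQuery& cache, const std::string& file) {
    bool fresh = cache.value.has_value();
    for (const auto& dep : cache.deps)
        if (db.changed_at[dep] > cache.verified_at) fresh = false;
    if (fresh) return *cache.value;              // reuse the previous result untouched

    std::cout << "(recomputing line_count for " << file << ")\n";
    cache.deps = {file};                          // record dependencies as they are read
    std::size_t n = 1;
    for (char c : db.files[file]) n += (c == '\n');   // 1 + number of newlines
    cache.value = n;
    cache.verified_at = db.revision;
    return n;
}

int main() {
    Database db;
    CachedQuery cache;
    db.set_file("main.go", "package main\nfunc main() {}\n");
    std::cout << line_count(db, cache, "main.go") << "\n";  // computed
    std::cout << line_count(db, cache, "main.go") << "\n";  // cache hit, no recomputation
    db.set_file("main.go", "package main\n");
    std::cout << line_count(db, cache, "main.go") << "\n";  // input changed -> recomputed
}
```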
206

SiLago: Enabling System Level Automation Methodology to Design Custom High-Performance Computing Platforms : Toward Next Generation Hardware Synthesis Methodologies

Farahini, Nasim January 2016 (has links)
207

Development of high performance hardware architectures for multimedia applications

Khan, Shafqat 29 September 2010 (has links) (PDF)
The computational power required of processors is constantly increasing due to the growing importance of multimedia applications in everyday life. These applications involve many computations on low-precision data, typically derived from pixels. The most effective way to exploit the data parallelism of these applications is subword parallelism (SWP): operations are performed in parallel on packed groups of low-precision data, making the best use of resources that are sized to process full words. This thesis proposes the design of various SWP operators for multimedia applications. A good match between the subword width and the width of the data being processed leads to better resource utilization and thus to more efficient execution of the application on the processor. The basic arithmetic operators developed are then used in a reconfigurable SWP operator, which can be configured to perform various multimedia operations on data of different widths. The reconfigurable operator can be used as a specialized unit or as a coprocessor in a multimedia processor to improve its performance. The internal speed of the processing units is further improved by representing numbers in a redundant number system rather than in standard binary; among other benefits, the redundant system speeds up arithmetic operations by avoiding costly carry propagation during additions. The results obtained show the performance benefit of using SWP operators for the execution of multimedia applications.
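As an illustration of the redundant-arithmetic claim in the abstract, the sketch below shows a carry-save addition step: three operands are reduced to a (sum, carry) pair using only carry-free bitwise operations, and a single carry-propagating addition is needed only when converting back to ordinary binary. This is a generic textbook technique, not the operators designed in the thesis.

```cpp
// Carry-save sketch of a redundant number representation: three operands are
// reduced to a (sum, carry) pair with purely bitwise, carry-free operations;
// one carry-propagating addition is needed only to convert back to plain binary.
#include <cstdint>
#include <cstdio>
#include <utility>

// One carry-save adder step: no carry ripples across bit positions.
std::pair<std::uint32_t, std::uint32_t>
carry_save_add(std::uint32_t a, std::uint32_t b, std::uint32_t c) {
    std::uint32_t sum   = a ^ b ^ c;                           // per-bit sum
    std::uint32_t carry = ((a & b) | (a & c) | (b & c)) << 1;  // per-bit carry at its weight
    return {sum, carry};
}

int main() {
    std::uint32_t a = 1000, b = 2000, c = 3000;
    auto [s, k] = carry_save_add(a, b, c);
    std::printf("redundant form: sum=%u carry=%u\n",
                static_cast<unsigned>(s), static_cast<unsigned>(k));
    std::printf("converted back: %u (expected %u)\n",
                static_cast<unsigned>(s + k), static_cast<unsigned>(a + b + c));
}
```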
208

Akcelerace adversariálních algoritmů s využití grafického procesoru / GPU Accelerated Adversarial Search

Brehovský, Martin January 2011 (has links)
General-purpose graphics processing units have proven useful for accelerating computationally intensive algorithms. Their capability for massively parallel computing significantly improves the performance of many algorithms. This thesis focuses on using graphics processors (GPUs) to accelerate algorithms based on adversarial search. We investigate whether adversarial search algorithms are suitable for the single-instruction, multiple-data (SIMD) type of parallelism that GPUs provide. To this end, GPU-accelerated parallel versions of selected algorithms were implemented and compared with their CPU counterparts. The results show a significant speed improvement and prove the applicability of GPU technology in the domain of adversarial search algorithms.
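The following sketch illustrates one common way to parallelize adversarial search at the root of the game tree: each root move's subtree is scored independently, which is the kind of uniform, data-parallel work that maps well to GPUs. Here `std::async` on the CPU stands in for GPU parallelism, and the toy game and evaluation function are invented; this is not the thesis' implementation.

```cpp
// Root-level parallel game-tree search: each root move's subtree is scored
// independently, the kind of data-parallel work that suits GPUs.
#include <algorithm>
#include <future>
#include <iostream>
#include <vector>

// Toy game: the state is an integer, and each move adds 1, 2 or 3 to it.
int evaluate(int state) { return (state * 37) % 17 - 8; }  // arbitrary static evaluation

// Plain fixed-depth negamax over the toy game.
int negamax(int state, int depth) {
    if (depth == 0) return evaluate(state);
    int best = -1000;
    for (int move = 1; move <= 3; ++move)
        best = std::max(best, -negamax(state + move, depth - 1));
    return best;
}

int main() {
    const int root = 0, depth = 6;

    // Search the independent subtrees under each root move in parallel.
    std::vector<std::future<int>> scores;
    for (int move = 1; move <= 3; ++move)
        scores.push_back(std::async(std::launch::async,
                                    [=] { return -negamax(root + move, depth); }));

    int best_move = 0, best_score = -1000;
    for (int move = 1; move <= 3; ++move) {
        int s = scores[move - 1].get();
        if (s > best_score) { best_score = s; best_move = move; }
    }
    std::cout << "best root move: " << best_move << " (score " << best_score << ")\n";
}
```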
209

Redistribution dynamique parallèle efficace de la charge pour les problèmes numériques de très grande taille / Efficient parallel dynamic load balancing for very large numerical problems

Fourestier, Sébastien 20 June 2013 (has links)
This thesis concerns efficient parallel dynamic load balancing for very large numerical problems. We first present a state of the art of the algorithms used to solve the partitioning, repartitioning, static mapping and remapping problems. Our first contribution is to study, in a sequential setting, the algorithmic features that parallel repartitioning methods should possess. We present our contribution to the design of a k-way multilevel framework for sequential repartitioning. The most challenging part of this work concerns the uncoarsening phase: one of our main contributions was to draw on influence methods in order to adapt a diffusion-based refinement algorithm to the repartitioning problem. Our second contribution is the implementation of these methods on parallel machines. Adapting the parallel multilevel scheme required evolving the algorithms and data structures used for partitioning. This work is backed by a thorough experimental analysis, made possible by the implementation of our algorithms in the Scotch library.
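To illustrate the diffusion principle underlying the refinement heuristic mentioned above, the toy sketch below balances load on a one-dimensional chain of partitions by repeatedly exchanging a fraction of the load difference between neighbours; it only conveys the basic idea and is unrelated to the actual algorithms implemented in the Scotch library.

```cpp
// Toy diffusion-based load balancing on a 1-D chain of partitions: each step
// moves a fraction of the load difference between neighbouring partitions,
// so the load gradually evens out while its total is conserved.
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<double> load = {16.0, 2.0, 4.0, 8.0, 0.0};  // initial per-partition load
    const double alpha = 0.25;  // diffusion coefficient: fraction of the difference moved

    for (int step = 0; step < 60; ++step) {
        std::vector<double> next = load;
        for (std::size_t i = 0; i + 1 < load.size(); ++i) {
            double flow = alpha * (load[i] - load[i + 1]);  // positive: load moves right
            next[i]     -= flow;
            next[i + 1] += flow;
        }
        load = next;
    }

    for (double l : load) std::cout << l << " ";  // approaches the average load (6.0)
    std::cout << "\n";
}
```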
210

Vysoce výkonné analýzy / High Performance Analytics

Kalický, Andrej January 2013 (has links)
This thesis explains the Big Data phenomenon, which is characterised by the rapid growth of the volume, variety and velocity of data as an information asset, and which drives a paradigm shift in analytical data processing. The thesis aims to provide a complete and consistent overview of the area of High Performance Analytics (HPA), including the problems and challenges at the state of the art of advanced analytics. The overview of HPA introduces a classification, the characteristics and the advantages of specific HPA methods utilising various combinations of system resources. In the practical part of the thesis, an experimental assignment focuses on the analytical processing of a large dataset using an analytical platform from SAS Institute. The experiment demonstrates the convenience and benefits of in-memory analytics (a specific HPA method) by evaluating the performance of different analytical scenarios and operations.
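As a loose illustration of the in-memory analytics idea discussed above, the sketch below loads a stand-in dataset into memory once and then runs several analytical passes over the same in-memory copy; the data and queries are invented, and the thesis itself evaluates the SAS in-memory platform rather than hand-written code.

```cpp
// In-memory analytics sketch: the dataset is loaded into RAM once, and several
// analytical passes then reuse the same in-memory copy without re-reading disk.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    // Stand-in for a large fact table loaded into memory once.
    std::vector<double> sales(1'000'000);
    for (std::size_t i = 0; i < sales.size(); ++i) sales[i] = (i % 100) * 1.5;

    // Several analytical operations reuse the same in-memory data.
    double total = std::accumulate(sales.begin(), sales.end(), 0.0);
    double mean  = total / sales.size();
    double peak  = *std::max_element(sales.begin(), sales.end());
    auto   big   = std::count_if(sales.begin(), sales.end(),
                                 [](double v) { return v > 100.0; });

    std::cout << "total=" << total << " mean=" << mean
              << " max=" << peak << " rows>100=" << big << "\n";
}
```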
