91.
Contributions to Software Runtime for Clustered Manycores Applied to Embedded and High-Performance Applications. Hascoët, Julien. 14 December 2018.
The growing need for computing power is increasingly hard to satisfy, especially in the embedded-system world with autonomous cars, drones, and smartphones. New highly parallel and heterogeneous processors have emerged to answer this challenge. They operate in constrained environments with real-time, power-consumption, and safety requirements. Programming these new chips is a time-consuming and challenging task, leading to huge software development costs. The Kalray MPPA® processor is a competitive example of low-power supercomputing on a single chip. It integrates up to 288 VLIW cores grouped in 18 clusters, each fitted with a shared local memory; the clusters are interconnected by a high-bandwidth network-on-chip, and DMA engines handle communications. This processor is used in this thesis for the experimental results. We propose the AOS library, enabling high-performance communication and synchronization across the distributed local memories of clustered manycores. AOS achieves 70% of the peak hardware throughput for transfers larger than 8 KB. We propose tools for implementing static and dynamic dataflow programs on top of AOS to accelerate parallel application development for clustered manycores. In particular, we propose an AOS-based implementation of OpenVX, a dataflow standard for computer vision and neural-network computing. The proposed OpenVX implementation includes automatic optimizations such as data prefetching, to overlap communications and computations, and kernel fusion, to avoid the external-memory bandwidth bottleneck. Results show super-linear speedups.
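For readers unfamiliar with OpenVX, the graph-based programming style that enables such optimizations looks roughly as follows. This is a minimal sketch against the standard OpenVX C API, not the thesis's MPPA®-specific runtime; because the whole graph is known before execution, an implementation is free to fuse adjacent kernels and prefetch tiles into cluster-local memory behind the scenes.

```c
#include <VX/vx.h>

/* Build a two-node OpenVX graph and run it. The virtual image between
   the nodes never needs to exist in external memory: it is a natural
   candidate for kernel fusion and local-memory prefetching. */
int main(void)
{
    vx_context ctx   = vxCreateContext();
    vx_graph   graph = vxCreateGraph(ctx);

    vx_image in  = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_U8);
    vx_image tmp = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_U8);
    vx_image out = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_U8);

    vxGaussian3x3Node(graph, in, tmp);   /* blur, then invert */
    vxNotNode(graph, tmp, out);

    if (vxVerifyGraph(graph) == VX_SUCCESS)  /* static checks and optimization */
        vxProcessGraph(graph);               /* execution */

    vxReleaseContext(&ctx);  /* releases the graph and images too */
    return 0;
}
```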
92.
Energy and Design Cost Efficiency for Streaming Applications on Systems-on-Chip. Zhu, Jun. January 2009.
With the increasing capacity of today's integrated circuits, a number of heterogeneous system-on-chip (SoC) architectures have been proposed for embedded systems. In order to achieve energy- and design-cost-efficient streaming applications on these systems, new design space exploration frameworks and performance analysis approaches are required. This thesis considers three state-of-the-art SoC architectures: multi-processor SoCs (MPSoCs) with network-on-chip (NoC) communication, hybrid CPU/FPGA architectures, and run-time reconfigurable (RTR) FPGAs. The main topic of the author's research is to model and capture the application scheduling, architecture customization, and buffer dimensioning problems according to the real-time requirements. Since these problems are NP-complete, heuristic algorithms and a constraint programming solver are used to compute solutions.

For NoC-based MPSoCs, an approach to optimize real-time streaming applications with customized processor voltage-frequency levels and memory sizes is presented. A multi-clocked synchronous model of computation (MoC) framework is proposed for heterogeneous timing analysis and energy estimation. Using heuristic search (i.e., greedy and tabu search), the experiments show an energy reduction of up to 21% without any loss in application throughput compared with an ad-hoc approach.

On hybrid CPU/FPGA architectures, the buffer-minimization scheduling of real-time streaming applications is addressed. Based on event models, the problem has been formalized declaratively as constraint-based scheduling and solved with the public-domain constraint solver Gecode. Compared with the traditional PAPS method, the proposed method needs significantly smaller buffers (2.4% of PAPS in the best case), while high throughput guarantees can still be achieved.

Furthermore, a novel compile-time analysis approach based on iterative timing phases is proposed for run-time reconfigurations in adaptive real-time streaming applications on RTR FPGAs. Finally, the reconfiguration analysis and design trade-off analysis capabilities of the proposed framework are exemplified with experiments on both example and industrial applications.
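To make the buffer-dimensioning problem concrete: for a single dataflow edge whose producer and consumer have fixed token rates, a buffer bound can already be derived from the balance equations. The toy C sketch below uses assumed rates; the thesis solves the general, NP-complete version with the Gecode constraint solver, so this is only the one-edge intuition.

```c
#include <stdio.h>

/* One SDF edge: actor A produces p tokens per firing, actor B consumes c.
   Balance equation rA * p == rB * c gives the repetition vector; simulating
   one iteration with eager consumption yields a buffer bound. */
static int gcd(int a, int b) { return b ? gcd(b, a % b) : a; }

int main(void)
{
    int p = 3, c = 2;                 /* hypothetical token rates */
    int g = gcd(p, c);
    int rA = c / g, rB = p / g;       /* minimal repetition vector */

    /* Fire B as soon as possible, tracking the peak buffer occupancy. */
    int tokens = 0, peak = 0, fA = 0, fB = 0;
    while (fA < rA || fB < rB) {
        if (fB < rB && tokens >= c)  { tokens -= c; fB++; }
        else if (fA < rA)            { tokens += p; fA++; }
        if (tokens > peak) peak = tokens;
    }
    /* For p=3, c=2 this prints the classical bound p + c - gcd(p,c) = 4. */
    printf("repetitions: A=%d B=%d, buffer bound=%d tokens\n", rA, rB, peak);
    return 0;
}
```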
93.
Improving Model-Based Software Synthesis: A Focus on Mathematical Structures. Goens Jokisch, Andres Wilhelm. 14 May 2021.
Computer hardware keeps increasing in complexity. Software design needs to keep up with this. The right models and abstractions empower developers to leverage the novelties of modern hardware. This thesis deals primarily with Models of Computation, as a basis for software design, in a family of methods called software synthesis.
We focus on Kahn Process Networks and dataflow applications as abstractions, both for programming and for deriving an efficient execution on heterogeneous multicores. The latter we accomplish by exploring the design space of possible mappings of computation and data to hardware resources. Mapping algorithms are not at the center of this thesis, however. Instead, we examine the mathematical structure of the mapping space, leveraging its inherent symmetries or geometric properties to improve mapping methods in general.
This thesis thoroughly explores the process of model-based design, aiming to go beyond the more established software synthesis for dataflow applications. We start with the problem of assessing these methods through benchmarking and go on to formally examine the general goals of benchmarks. In this context, we also consider the role modern machine learning methods play in benchmarking.
We explore different established semantics, stretching the limits of Kahn Process Networks. We also discuss novel models, like Reactors, a model designed to be deterministic and adaptive, with time as a first-class citizen. By investigating abstractions and transformations in the Ohua language for implicit dataflow programming, we also address programmability.
The focus of the thesis is on the models and methods, but we evaluate them in diverse use cases, generally centered around Cyber-Physical Systems: the 5G telecommunication standard and the automotive and signal-processing domains. We even go beyond embedded systems and discuss use cases in GPU programming and microservice-based architectures.
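To make the symmetry idea concrete: on a cluster of identical cores, two mappings that differ only by a permutation of the cores behave identically, so a design-space search need only visit one representative per equivalence class. A minimal sketch of such a canonicalization follows (hypothetical task and core counts; not the thesis's actual tooling):

```c
#include <stdio.h>
#include <string.h>

/* Canonical form of a task-to-core mapping on identical cores: relabel
   cores in order of first appearance. Mappings equal up to a core
   permutation share one canonical form, so a search can skip duplicates. */
#define NTASKS 5

static void canonicalize(const int map[NTASKS], int out[NTASKS])
{
    int relabel[NTASKS];              /* old core id -> new core id */
    memset(relabel, -1, sizeof relabel);
    int next = 0;
    for (int t = 0; t < NTASKS; t++) {
        if (relabel[map[t]] < 0) relabel[map[t]] = next++;
        out[t] = relabel[map[t]];
    }
}

int main(void)
{
    /* Two mappings differing only by swapping cores 0 and 2. */
    int a[NTASKS] = {0, 0, 2, 1, 2};
    int b[NTASKS] = {2, 2, 0, 1, 0};
    int ca[NTASKS], cb[NTASKS];
    canonicalize(a, ca);
    canonicalize(b, cb);
    printf("equivalent: %s\n",
           memcmp(ca, cb, sizeof ca) == 0 ? "yes" : "no");
    return 0;
}
```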
94.
Increasing Design Productivity for FPGAs Through IP Reuse and Meta-Data Encapsulation. Arnesen, Adam T. 17 March 2011.
As Moore's law continues to progress, it is becoming increasingly difficult for hardware designers to fully utilize the growing number of transistors available in semiconductor devices, including FPGAs. This design productivity gap must be addressed to allow designs to take full advantage of the increased logic density that results from rising transistor density. The reuse of previously developed and verified intellectual property (IP) is one approach claimed to narrow the design productivity gap. Reuse, however, has proved difficult to realize in practice because of the complexity of IP and the reluctance of designers to reuse IP that they do not understand. This thesis proposes to narrow the design productivity gap for FPGAs by simplifying the reuse problem: IP is encapsulated with extra machine-readable information, or meta-data. This meta-data simplifies reuse by providing a language-independent format for composing complex systems, providing a parameter representation system, defining high-level data types for FPGA IP, and allowing arbitrary IP to be described as actors in the homogeneous synchronous dataflow model of computation.

This work implements meta-data in XML and presents two XML schemas that enable reuse: a new schema known as CHREC XML, and extensions that enable IP-XACT to describe FPGA dataflow IP. Two tools developed in this work are also presented that leverage meta-data to simplify the reuse of arbitrary IP. These tools simplify the structural composition of IP, allow designers to manipulate parameters, check and validate high-level data types, and automatically synthesize control circuitry for dataflow designs. Productivity improvements are demonstrated by reusing IP to quickly compose software radio receivers.
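As a rough illustration of the kind of information such meta-data carries (the actual CHREC XML and IP-XACT descriptions are XML-based; every field and service name below is hypothetical), an IP block exposed as a dataflow actor needs at least its ports, their high-level data types, and its parameters, and a composition tool can then check connections automatically:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical, simplified C view of reuse meta-data for one FPGA IP
   block modeled as a homogeneous SDF actor (one token per port per
   firing). Real descriptions live in XML schemas, not C structs. */
typedef struct {
    const char *name;     /* port name             */
    const char *dtype;    /* high-level data type  */
    int is_input;         /* direction             */
} Port;

typedef struct {
    const char *ip_name;
    Port ports[4];
    int nports;
    int param_width;      /* example generic parameter */
} IpMetadata;

/* The kind of check a composition tool performs: types must match. */
static int can_connect(const Port *src, const Port *dst)
{
    return !src->is_input && dst->is_input &&
           strcmp(src->dtype, dst->dtype) == 0;
}

int main(void)
{
    IpMetadata fir = {"fir_filter",
        {{"in", "complex16", 1}, {"out", "complex16", 0}}, 2, 16};
    IpMetadata fft = {"fft256",
        {{"in", "complex16", 1}, {"out", "complex16", 0}}, 2, 16};
    printf("fir.out -> fft.in: %s\n",
           can_connect(&fir.ports[1], &fft.ports[0]) ? "ok" : "type error");
    return 0;
}
```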
95.
ChipCflow - validation and implementation of the partition model and communication protocol in the dynamic dataflow graph. Souza Júnior, Francisco de. 24 January 2011.
The ChipCflow tool has been developed over the last four years, initially as a dynamic dataflow architecture design in reconfigurable hardware, and now as a compilation tool. It aims to run algorithms using the dataflow architecture model associated with the concept of partially reconfigurable devices. Its main feature is to accelerate the execution of programs written in high-level languages, particularly their most processing-intensive parts. This is done by implementing those parts of the code directly in reconfigurable hardware, using FPGA technology, leveraging the natural parallelism of the dataflow model and the characteristics of partially reconfigurable hardware. In this work, the main goal is a proof of concept of the partition process and of the communication protocol between the partitions defined from a Data Flow Graph, for direct execution in reconfigurable hardware using Active Partial Reconfiguration. This required a partition mechanism and a protocol for communication between these partitions, since Active Partial Reconfiguration introduces limiting technological characteristics not found in more traditional reconfigurable hardware. The mechanism developed proved partially adequate for the proof of concept, meaning it is possible to run Data Flow Graphs on the partially reconfigurable platform. However, the reconfiguration times add a large overhead to the execution time, which made the initial proposal of using Active Partial Reconfiguration to reduce the tag-matching time of dynamic Data Flow Graphs unfeasible.
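The tag matching mentioned at the end is the core mechanism of dynamic dataflow: a two-input node fires only once tokens carrying the same tag (e.g., the same loop iteration) have arrived on both ports. A minimal software sketch of a matching store follows; it is a generic illustration, not ChipCflow's hardware protocol, and it simplifies by assuming the waiting token is always on the opposite port.

```c
#include <stdio.h>

/* Dynamic dataflow tag matching for a two-input node: a token waits in
   the matching store until its partner with the same tag arrives; then
   the node fires (here: addition). */
#define STORE 8

typedef struct { int tag, value, valid; } Slot;
static Slot store[STORE];

static void fire(int tag, int left, int right)
{
    printf("tag %d: fire(%d, %d) = %d\n", tag, left, right, left + right);
}

static void token_arrives(int tag, int value, int port)
{
    for (int i = 0; i < STORE; i++)           /* look for the partner */
        if (store[i].valid && store[i].tag == tag) {
            store[i].valid = 0;
            if (port == 0) fire(tag, value, store[i].value);
            else           fire(tag, store[i].value, value);
            return;
        }
    for (int i = 0; i < STORE; i++)           /* otherwise, wait */
        if (!store[i].valid) {
            store[i] = (Slot){tag, value, 1};
            return;
        }
}

int main(void)
{
    token_arrives(7, 40, 0);   /* left operand of iteration 7 */
    token_arrives(9, 10, 0);   /* left operand of iteration 9 */
    token_arrives(7,  2, 1);   /* right operand of 7 -> fires */
    token_arrives(9,  5, 1);   /* right operand of 9 -> fires */
    return 0;
}
```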
96.
Reconfigurable hardware acceleration of CNNs on FPGA-based smart cameras. Abdelouahab, Kamel. 11 December 2018.
Deep Convolutional Neural Networks (CNNs) have become a de-facto standard in computer vision. This success came at the price of a high computational cost, making the implementation of CNNs under real-time constraints a challenging task. To address this challenge, the literature exploits the large amount of parallelism exhibited by these algorithms, motivating the use of dedicated hardware platforms. In power-constrained environments, such as smart camera nodes, FPGA-based processing cores are known to be adequate solutions for accelerating computer vision applications. This is especially true for CNN workloads, whose streaming nature suits reconfigurable hardware architectures well. In this context, this thesis addresses the problem of mapping CNNs onto FPGAs. In particular, it aims at improving the efficiency of CNN implementations through two main optimization strategies: the first one focuses on the CNN model and parameters, while the second one considers the hardware architecture and its fine-grain building blocks.
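The streaming nature referred to above is what FPGA dataflow implementations exploit: pixels arrive one per cycle, and a small line buffer keeps just enough history to form each convolution window, so no frame ever has to sit in external memory. Below is a behavioral C model of that scheme for a 3x3 kernel, a sketch of the general technique rather than the thesis's hardware generator:

```c
#include <stdio.h>

/* Behavioral model of a streaming 3x3 convolution: two line buffers hold
   the previous rows, so each incoming pixel completes one 3x3 window.
   This mirrors how FPGA dataflow pipelines avoid buffering whole frames. */
#define W 8                      /* image width (toy size) */

static int lines[2][W];          /* last two rows       */
static int win[3][3];            /* sliding 3x3 window  */

static int conv(const int k[3][3])
{
    int acc = 0;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            acc += k[i][j] * win[i][j];
    return acc;
}

static void push_pixel(int x, int y, int p, const int k[3][3])
{
    for (int i = 0; i < 3; i++) {             /* shift window left   */
        win[i][0] = win[i][1];
        win[i][1] = win[i][2];
    }
    win[0][2] = lines[0][x];                  /* column from buffers */
    win[1][2] = lines[1][x];
    win[2][2] = p;
    lines[0][x] = lines[1][x];                /* update line buffers */
    lines[1][x] = p;
    if (x >= 2 && y >= 2)                     /* window is full      */
        printf("out(%d,%d) = %d\n", x - 1, y - 1, conv(k));
}

int main(void)
{
    const int k[3][3] = {{0,0,0},{0,1,0},{0,0,0}};   /* identity kernel */
    for (int y = 0; y < 5; y++)
        for (int x = 0; x < W; x++)
            push_pixel(x, y, y * W + x, k);          /* pixel stream */
    return 0;
}
```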
97.
Semantics and a Tool for the SADT Method. Ribeiro, Adagenor Lobato. January 1991.
The definition of system requirements has been recognized as one of the most critical and difficult tasks in software engineering, and tool support for it is essential. Among the various methods devised to support the requirements phase, special emphasis is given to SADT (Structured Analysis and Design Technique), due to its capability of representing models. This work defines a semantics for the SADT method, based on relating the method to dataflow systems (nets, graphs, and dataflow machines). It first gives an operational semantics for its basic constructs and then discusses the possibility of executing specifications through simulation. A tool to support the SADT method was designed, built, and is presented here. It was defined from a model, denoted by a class, through an abstract syntax, and implemented in the PROSOFT environment, providing the user with more than forty operations for constructing and manipulating diagrams. This work also presents a formal specification, in VDM (Vienna Development Method), of the semantics of the main SADT constructs, as well as a proposal for executing specifications through simulation. Directions in which the work can be extended are also indicated.
98.
XML data manipulation by non-expert users. Tekli, Gilbert. 04 October 2011.
Today, computers and the Internet are everywhere: in every home, domain, and platform. In this context, the XML standard has established itself as a prominent means for the efficient representation and exchange of data. Communications and information exchanges between heterogeneous users, applications, and information systems are now carried out in XML to guarantee data interoperability. XML's simple and robust encoding, based on semi-structured textual data, has allowed the standard to rapidly pervade communication media. These communications have become cross-domain, starting from computer science and spreading into the medical, commercial, and social domains. Consequently, given the growing volume of XML data flowing between non-expert users (employees, scientists, etc.), whether over instant messaging, social networks, data storage, or elsewhere, it becomes essential to let non-expert users manipulate and control their own data (e.g., parents who wish to apply parental controls to the instant messaging in their home, or a journalist who wants to aggregate and filter information coming from different RSS feeds). The main objective of this thesis is the study of XML data manipulation by non-expert users. Four main categories have been identified in the literature: i) XML-oriented visual languages, ii) mashups, iii) XML data manipulation techniques, and iv) DFVPLs (dataflow-based visual programming languages), each covering a different direction. However, none of them provides a complete solution. In this research work, we formally define an XML manipulation framework, called XA2C (XML-oriented mAnipulAtion Compositions). XA2C is a visual programming environment (in the spirit of Visual Studio) for an XML-oriented DFVPL, called XCDL (XML-oriented Composition Definition Language), which constitutes the major contribution of this thesis. XCDL, based on colored Petri nets, allows non-experts to define, arrange, and compose XML-oriented manipulation operations. These operations can be simple data selections/projections as well as more complex data modifications (insertion, deletion, watermarking, etc.). The proposed language handles XML data as documents or fragments. In addition to the formal (syntactic and semantic) definition of the XCDL language, XA2C introduces a complete architecture based on a dedicated compiler and runtime environment. To test and evaluate our theoretical approach, we developed a prototype, called X-Man, together with an evaluation framework for XML-oriented visual programming languages and tools. A series of case studies and experiments was carried out to evaluate the usability of our language and compare it to existing solutions. The results obtained highlight the superiority of our approach, notably in terms of interaction, visualization, and usability. Several directions are being explored, such as the integration of more complex operations (control operators, loops, etc.), automatic compositions, and extending the language to handle the specificities of formats derived from the XML standard (RSS feeds, RDF, SMIL, etc.).
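To give a flavor of what such dataflow compositions express (XCDL itself is visual and Petri-net based; the operator names and naive string handling below are made up purely for illustration), chaining two XML-oriented operations amounts to wiring the output of one into the input of the next:

```c
#include <stdio.h>
#include <string.h>

/* Toy dataflow composition of XML manipulation operations: each operator
   consumes a buffer and produces a new one, and a composition is just an
   ordered wiring of operators. Operator names are hypothetical. */
typedef void (*XmlOp)(const char *in, char *out, size_t n);

static void op_select_title(const char *in, char *out, size_t n)
{
    const char *s = strstr(in, "<title>");          /* naive selection */
    const char *e = s ? strstr(s, "</title>") : NULL;
    if (s && e) snprintf(out, n, "%.*s", (int)(e - s - 7), s + 7);
    else        out[0] = '\0';
}

static void op_redact(const char *in, char *out, size_t n)
{
    snprintf(out, n, "%s", in);                     /* naive filtering */
    for (char *p = out; (p = strstr(p, "secret")); )
        memset(p, '*', 6);
}

int main(void)
{
    XmlOp pipeline[] = {op_select_title, op_redact};
    char a[256] = "<doc><title>secret plans</title></doc>", b[256];
    char *in = a, *out = b;
    for (size_t i = 0; i < 2; i++) {                /* run the composition */
        pipeline[i](in, out, sizeof a);
        char *t = in; in = out; out = t;
    }
    printf("result: %s\n", in);   /* prints "****** plans" */
    return 0;
}
```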
99.
Contribution to the management of large scale platforms: the Diet experience. Caron, Eddy. 06 October 2010.
Ten years. Ten years of research on high-performance computing in distributed environments, and, throughout those years, the development of a middleware called DIET as the glue binding this research together. Today, the birth of a start-up around DIET gives the middleware a second life, and this turning point is an opportunity to offer what I hope is a complete picture of the adventure. Through the DIET experience, it is worth examining the research problems inherent in the complete development of a grid and cloud middleware for high-performance computing. Interoperability is addressed first, through the GridRPC standardization effort, and we see how DIET answers the resource-localization problem. Scalability is then treated through the proposed architecture. Next we present our answer to service discovery which, although it started from a need of our own middleware, led to a generic solution. These first works focus on the client side; on the server side, we describe the solution put in place for resource management. The next step is deployment and deployment planning. In keeping with our initial objective of providing a complete tool, we then tackle the problems of data management. We then highlight one of DIET's strengths: its answer to scheduling problems in heterogeneous environments, which leads us to workflow management in our middleware. Finally, to conclude, I present several use cases of DIET on real and varied applications, including the platform of the Décrypthon project, which uses our middleware in production.
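For context, the GridRPC standardization effort mentioned above defines a small C API for remote calls to middleware such as DIET. A client-side call follows roughly the pattern below; this is a sketch based on the published GridRPC API, and the service name "matmul" and its argument list are hypothetical.

```c
#include <stdio.h>
#include "grpc.h"   /* GridRPC client API header, as standardized */

/* Sketch of a GridRPC client call: the middleware (e.g., DIET) resolves
   the service name to a suitable server, so the client never names one. */
int main(int argc, char *argv[])
{
    grpc_function_handle_t handle;
    double a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, c[4];

    if (argc < 2 || grpc_initialize(argv[1]) != GRPC_NO_ERROR)
        return 1;                                /* client config file */

    /* Let the middleware pick a server for the "matmul" service. */
    grpc_function_handle_default(&handle, "matmul");

    /* Synchronous call; argument marshalling depends on the service. */
    if (grpc_call(&handle, 2, a, b, c) == GRPC_NO_ERROR)
        printf("c[0] = %f\n", c[0]);

    grpc_function_handle_destruct(&handle);
    grpc_finalize();
    return 0;
}
```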
100.
RETHROTTLE: Execution Throttling in the REDEFINE SoC Architecture. Satrawala, Amar Nath. 06 1900.
REDEFINE is a reconfigurable SoC architecture that provides a unique platform for high-performance, low-power computing by exploiting the synergistic interaction between a coarse-grain dynamic dataflow model of computation (to expose abundant parallelism in applications) and the runtime composition of efficient compute structures (on the reconfigurable computation resources). Computer architectures based on the dynamic dataflow model of computation would have to provide infinite resources to exploit all available parallelism in all applications, which is not feasible in any real implementation. When limited-resource implementations are considered, there is a possibility of performance loss (an inability to exploit the available parallelism efficiently). In this thesis, we study the throttling of execution in the REDEFINE architecture to maximize architecture efficiency. We formulate it as a design space exploration problem at two levels: architectural configurations and throttling schemes.
Reduced-feature/high-level simulation and feature-specific analytical approaches are very useful for the selective study and exploration of architectures and systems early in the design phase. Our approach is similar to that of the Sesame framework, which is used for the study of MPSoC (multiprocessor SoC) architectures. We use abstraction (feature reduction) at the levels of the architecture and the model of computation to make the problem approachable and practically feasible. A feature-specific, fast hybrid (mixed-level) simulation framework for early design-phase studies was developed and implemented for the huge design space exploration: 1284 throttling schemes, 128 architectural configurations, and 10 applications, i.e., about 1.6 million executions.
We performed performance modeling in terms of selecting important performance criteria, ranking the explored throttling schemes, and investigating the effectiveness of the design space exploration using statistical hypothesis testing. We found some obvious/intuitive results and some non-obvious/counterintuitive ones. Two performance criteria, namely Exec.T and Avg.TU, were found sufficient to represent the performance and resource-usage characteristics of the architecture, independently of the throttling schemes, the architectural configurations, and the applications. The ranking of the throttling schemes based on the selected performance criteria is statistically very significant. The intuitive throttling schemes span the range of performance from the best to the worst. We found no trade-off among the performance criteria: the best throttling schemes give appreciable performance (25%) and resource-usage (37%) gains in the throttling unit simultaneously. The design space exploration of the throttling schemes was found to be fine-grained and uniform.
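As a rough illustration of what a throttling scheme controls (a generic sketch; REDEFINE's actual throttling unit operates on its own compute structures), a throttle admits new parallel instances only while the in-flight count stays below a window, queueing the rest:

```c
#include <stdio.h>

/* Generic execution-throttling sketch: the window size is the kind of
   knob a throttling scheme tunes to trade parallelism for resource use. */
#define WINDOW 4

static int in_flight = 0, queued = 0, admitted = 0;

static void try_spawn(void)            /* a new instance is requested  */
{
    if (in_flight < WINDOW) { in_flight++; admitted++; }
    else                    { queued++; }
}

static void retire(void)               /* an instance finishes         */
{
    in_flight--;
    if (queued > 0) { queued--; in_flight++; admitted++; }
}

int main(void)
{
    for (int i = 0; i < 6; i++) try_spawn();            /* burst of 6 */
    printf("in flight=%d queued=%d\n", in_flight, queued);   /* 4, 2 */
    retire(); retire();
    printf("in flight=%d queued=%d admitted=%d\n",
           in_flight, queued, admitted);                  /* 4, 0, 6 */
    return 0;
}
```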