41 |
Scheduling Tasks on Heterogeneous Chip Multiprocessors with Reconfigurable Hardware
Teller, Justin Stevenson, 31 July 2008 (has links)
No description available.
|
42 |
An implementation of the parallelism, distribution and nondeterminism of membrane computing models on reconfigurable hardware
Nguyen, Van-Tuong, January 2010 (has links)
Membrane computing investigates models of computation inspired by certain features of biological cells, especially features arising because of the presence of membranes. Because of their inherent large-scale parallelism, membrane computing models (called P systems) can be fully exploited only through the use of a parallel computing platform. However, it is an open question whether it is feasible to develop an efficient and useful parallel computing platform for membrane computing applications. Such a computing platform would significantly outperform equivalent sequential computing platforms while still achieving acceptable scalability, flexibility and extensibility. To move closer to an answer to this question, I have investigated a novel approach to the development of a parallel computing platform for membrane computing applications that has the potential to deliver a good balance between performance, flexibility, scalability and extensibility. This approach involves the use of reconfigurable hardware and an intelligent software component that is able to configure the hardware to suit the specific properties of the P system to be executed. As part of my investigations, I have created a prototype computing platform called Reconfig-P based on the proposed development approach. Reconfig-P is the only existing computing platform for membrane computing applications able to support both system-level and region-level parallelism. Using an intelligent hardware source code generator called P Builder, Reconfig-P is able to realise an input P system as a hardware circuit in various ways, depending on which aspects of P systems the user wishes to emphasise at the implementation level. For example, Reconfig-P can realise a P system in a rule-oriented manner or in a region-oriented manner. P Builder provides a unified implementation framework within which the various implementation strategies can be supported. The basic principles of this framework conform to a novel design pattern called Content-Form-Strategy. The framework seamlessly integrates the currently supported implementation approaches, and facilitates the inclusion of additional implementation strategies and additional P system features. Theoretical and empirical results regarding the execution time performance and hardware resource consumption of Reconfig-P suggest that the proposed development approach is a viable means of attaining a good balance between performance, scalability, flexibility and extensibility. Most of the existing computing platforms for membrane computing applications fail to support nondeterministic object distribution, a key aspect of P systems that presents several interesting implementation challenges. I have devised an efficient algorithm for nondeterministic object distribution that is suitable for implementation in hardware. Experimental results suggest that this algorithm could be incorporated into Reconfig-P without too significantly reducing its performance or efficiency. / Thesis (PhDInformationTechnology)--University of South Australia, 2010
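As a purely illustrative aside (not taken from the thesis itself), the sketch below shows the kind of region-level step a P system executor such as Reconfig-P must realise: evolution rules rewrite a multiset of objects, and when several rules compete for the same objects, the objects are distributed among them nondeterministically until no rule can fire. All names (Rule, max_parallel_step, the example rules) are hypothetical.

```python
import random
from collections import Counter

# A rule rewrites a multiset of objects (lhs) into another multiset (rhs)
# within one membrane region. Names and structure are illustrative only.
Rule = tuple  # (lhs: Counter, rhs: Counter)

def applicable(rule, objects):
    """A rule can fire if the region holds every object its lhs needs."""
    lhs, _ = rule
    return all(objects[o] >= n for o, n in lhs.items())

def max_parallel_step(objects, rules, rng=random):
    """One maximally parallel evolution step with nondeterministic object
    distribution: keep choosing an applicable rule at random and reserving
    its inputs until no rule can fire, then release all produced objects."""
    objects = Counter(objects)
    produced = Counter()
    enabled = [r for r in rules if applicable(r, objects)]
    while enabled:
        lhs, rhs = rng.choice(enabled)      # nondeterministic choice
        objects -= lhs                      # reserve (consume) the inputs
        produced += rhs
        enabled = [r for r in rules if applicable(r, objects)]
    return objects + produced               # unused objects carry over

if __name__ == "__main__":
    rules = [(Counter({"a": 2}), Counter({"b": 1})),
             (Counter({"a": 1, "c": 1}), Counter({"d": 2}))]
    print(max_parallel_step(Counter({"a": 5, "c": 2}), rules))
```

Running the example several times can yield different multisets, which is exactly the behaviour a hardware implementation of nondeterministic object distribution has to reproduce efficiently.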
|
43 |
Arcabouço conceitual para computação reconfigurável (A conceptual framework for reconfigurable computing)
Molinos, Diego Nunes, 07 February 2014 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Over the years, computing has driven a radical change in the professional and personal profiles of its users. In recent years there has been a growing use of computing as an auxiliary tool for solving problems, problems that arise ever more frequently in different areas of knowledge.
When the requirements of an application exceed the capacity of existing solutions, new kinds of solutions are developed to meet the demands of complexity. Reconfigurable computing emerged as a computational solution model that integrates the performance of fixed hardware with the flexibility of software, uniting the best of both paradigms.
Reconfigurable computing is a relatively new and promising field in which the main concepts and components present since its theoretical foundation still serve as the basis for the evolution of knowledge in the area. Some of these concepts are older than others, and the newer ones arise from the need for a better understanding of the field of study.
It has recently been observed in the published literature that some concepts of the reconfigurable computing field are applied incorrectly or, on other occasions, without all of their features being exploited. This lack of clarity in the use of concepts hinders the development of the field and contributes to the impoverishment of the area, affecting especially students and researchers in the early stages of learning, who look to these works for theoretical consistency.
Indeed, a conceptual discussion within any field of study is always of significant importance to that area. The conceptual framework proposed in this work aims to identify and present the conceptual definitions involved in the reconfigurable computing field, as well as their relationships. Within this framework we propose an organizational model of concepts for reconfigurable computing and a concept map, with all of the information validated through a consensus of opinion among several reconfigurable computing specialists.
Moreover, this framework is intended to serve as an auxiliary tool for learning reconfigurable computing, aiding with some methodological requirements of research as well as with the growth of theoretical knowledge. / Mestre em Ciência da Computação
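For illustration only, a concept map of the kind the thesis proposes can be represented as a set of labelled, directed relationships between concepts. The concepts and relations below are invented examples, not the thesis's validated map.

```python
# Hypothetical representation of a concept map: concepts as nodes,
# named relationships as labelled directed edges (source, relation, target).
concept_map = {
    ("reconfigurable computing", "is implemented on", "FPGA"),
    ("FPGA", "is composed of", "configurable logic blocks"),
    ("reconfigurable computing", "combines", "hardware performance"),
    ("reconfigurable computing", "combines", "software flexibility"),
    ("partial reconfiguration", "is a capability of", "FPGA"),
}

def related_to(concept, cmap):
    """Return the (relation, target) pairs leaving a given concept."""
    return [(rel, dst) for src, rel, dst in cmap if src == concept]

if __name__ == "__main__":
    for rel, dst in related_to("reconfigurable computing", concept_map):
        print(f"reconfigurable computing --{rel}--> {dst}")
```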
|
44 |
Compiling For Coarse-Grained Reconfigurable Architectures Based On Dataflow Execution Paradigm
Alle, Mythri, 12 1900 (has links) (PDF)
Coarse-Grained Reconfigurable Architectures (CGRAs) can be employed to accelerate computational workloads that demand both flexibility and performance. CGRAs comprise a set of computation elements interconnected by a network; this interconnection of computation elements is referred to as a reconfigurable fabric. The size of the application that can be accommodated on the reconfigurable fabric is limited by the size of the instruction buffers associated with each compute element. When an application cannot be accommodated entirely, it is partitioned such that each partition can be executed on the reconfigurable fabric. These partitions are scheduled by an orchestrator, which employs the dynamic dataflow execution paradigm. The dynamic dataflow execution paradigm has inherent support for synchronization and helps exploit the parallelism that exists across application partitions. In this thesis, we present a compiler that targets such CGRAs.
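Purely as an illustrative sketch (not the thesis's orchestrator), the dynamic dataflow firing rule described above can be modelled as follows: a partition becomes ready to execute once all of its input tokens have arrived, and its outputs in turn enable downstream partitions. The class and function names are hypothetical.

```python
from collections import deque

# Illustrative-only model of an orchestrator scheduling application
# partitions under a dynamic dataflow firing rule.
class Partition:
    def __init__(self, name, inputs, outputs):
        self.name = name
        self.inputs = set(inputs)     # token names this partition waits on
        self.outputs = list(outputs)  # token names it produces
        self.arrived = set()

    def ready(self):
        return self.arrived == self.inputs

def orchestrate(partitions, initial_tokens):
    tokens = deque(initial_tokens)
    waiting = list(partitions)
    order = []
    while tokens:
        tok = tokens.popleft()
        for p in waiting:
            if tok in p.inputs:
                p.arrived.add(tok)
        fired = [p for p in waiting if p.ready()]
        for p in fired:                # fire every partition that is ready
            order.append(p.name)
            tokens.extend(p.outputs)   # its results enable its successors
            waiting.remove(p)
    return order

if __name__ == "__main__":
    parts = [Partition("P1", ["in"], ["x"]),
             Partition("P2", ["in"], ["y"]),
             Partition("P3", ["x", "y"], ["out"])]
    print(orchestrate(parts, ["in"]))  # P1 and P2 may fire before P3
```

Because readiness depends only on token arrival, P1 and P2 need no explicit synchronization with each other, which is the parallelism across partitions the paragraph refers to.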
The compiler presented in this thesis is capable of accepting applications specified in the C89 standard. To enable architectural design-space exploration, the compiler is designed so that it can be customized for several instances of CGRAs employing the dataflow execution paradigm at the orchestrator; this is achieved by specifying the appropriate configuration parameters to the compiler. The focus of this thesis is to provide efficient support for various kinds of parallelism while ensuring correctness. The compiler is designed to support fine-grained task-level parallelism that exists across iterations of loops and across function calls. Additionally, the compiler can support pipeline parallelism, where a loop is split into multiple stages that execute in a pipelined manner.
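As an illustration of the pipeline parallelism mentioned above (a sketch under assumed names, not the compiler's actual output), a loop body can be split into two stages connected by a queue so that successive iterations occupy different stages concurrently:

```python
import threading
import queue

# Illustrative sketch of pipeline parallelism: a loop body split into two
# stages that run concurrently, with iterations flowing through a queue.
def stage1(src, out_q):
    for x in src:
        out_q.put(x * x)          # first half of the original loop body
    out_q.put(None)               # end-of-stream marker

def stage2(in_q, results):
    while True:
        x = in_q.get()
        if x is None:
            break
        results.append(x + 1)     # second half of the original loop body

if __name__ == "__main__":
    q, results = queue.Queue(maxsize=4), []
    t1 = threading.Thread(target=stage1, args=(range(8), q))
    t2 = threading.Thread(target=stage2, args=(q, results))
    t1.start(); t2.start(); t1.join(); t2.join()
    print(results)                # each iteration passed through both stages
```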
The prototype compiler, which targets multiple instances of a CGRA, is demonstrated in this thesis. We used this compiler to target multiple variants of CGRAs employing the dataflow execution paradigm, varying the reconfigurable fabric, the orchestration mechanism employed, and the size of the instruction buffers. We also chose applications from two different domains, viz. cryptography and linear algebra. The execution time on the CGRA (the best among all instances) is compared against an Intel quad-core processor. Cryptography applications show a performance improvement ranging from more than one order of magnitude to close to two orders of magnitude. These applications have large amounts of ILP, and our compiler could successfully expose the ILP available in them. Further, domain customization also played an important role in achieving good performance: we employed two custom functional units for accelerating cryptography applications, and the compiler could use them efficiently. In the linear algebra kernels we observe multiple iterations of the loop executing in parallel, effectively exploiting loop-level parallelism at runtime. In spite of this, we notice close to an order of magnitude of performance degradation, which can be attributed to the use of non-pipelined floating-point units and the delays involved in accessing memory. Pipeline parallelism was demonstrated using this compiler for FFT and QR factorization. Thus, the compiler is capable of efficiently supporting different kinds of parallelism and can support the complete C89 standard. Further, the compiler can also target different instances of CGRAs employing the dataflow execution paradigm.
|
45 |
Autonomous Recovery Of Reconfigurable Logic Devices Using Priority Escalation Of Slack
Imran, Naveed, 01 January 2013 (links)
Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep submicron era, resilient adaptable processing systems are sought to maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault-handling which addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here an autonomous fault isolation scheme is employed which neither requires test vectors nor suspends the computational throughput, but instead observes the value of a health metric based on runtime input. The deterministic flow of the fault isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The scheme developed is demonstrated by hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device. These include a Discrete Cosine Transform (DCT) core, Motion Estimation (ME) engine, Finite Impulse Response (FIR) filter, Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks in addition to MCNC benchmark circuits. A significant reduction in power consumption is achieved, ranging from 83% for low-motion-activity scenes to 12.5% for high-motion-activity video scenes in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within a difference of 4.02 dB to 6.67 dB from the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria.
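To make the priority-escalation idea concrete, here is a minimal sketch in the spirit of the scheme described above: when the health metric of a resource hosting a high-priority function degrades, a healthy resource currently serving lower-priority (slack) work is escalated to take over that function. The health metric, thresholds, and all names are assumptions for illustration, not the dissertation's implementation.

```python
# Illustrative-only sketch of priority-aware resource escalation.
def escalate(assignments, health, threshold=0.8):
    """assignments: {resource: (function, priority)} with larger priority
    meaning more critical; health: {resource: score in [0, 1]}.
    Returns a new assignment map after escalation."""
    assignments = dict(assignments)
    suspect = [r for r, h in health.items() if h < threshold]
    # handle the most critical degraded resources first
    for bad in sorted(suspect, key=lambda r: -assignments[r][1]):
        func, prio = assignments[bad]
        # candidate donors: healthy resources running lower-priority work
        donors = [r for r, (f, p) in assignments.items()
                  if health[r] >= threshold and p < prio]
        if donors:
            donor = max(donors, key=lambda r: health[r])
            # swap: the critical function moves to the healthy resource,
            # the degraded resource inherits the slack (low-priority) work
            assignments[bad], assignments[donor] = (
                assignments[donor], (func, prio))
    return assignments

if __name__ == "__main__":
    assign = {"PE0": ("DCT", 3), "PE1": ("stats", 1), "PE2": ("logging", 0)}
    health = {"PE0": 0.42, "PE1": 0.95, "PE2": 0.90}
    print(escalate(assign, health))   # DCT migrates off the degraded PE0
```

The design choice illustrated here is the one the abstract emphasises: rather than holding dedicated spares, healthy resources already doing low-priority work act as the slack that gets escalated to critical functions.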
|