1 |
FPGA-based Soft Vector Processors. Yiannacouras, Peter, 23 February 2010.
FPGAs are increasingly used to implement embedded digital systems because of their low time-to-market and low cost compared to integrated circuit design, as well as their superior performance and area efficiency compared to a general-purpose microprocessor. However, the hardware design necessary to achieve this superior performance and area is very difficult to perform, causing long design times and preventing widespread adoption of FPGA technology. The amount of hardware design can be reduced by employing a microprocessor for the less-critical computation in the system. Often this microprocessor is implemented in the FPGA's reprogrammable fabric as a soft processor, which preserves the benefits of a single-chip FPGA solution without specializing the device with dedicated hard processors. Current soft processors have simple architectures that provide performance adequate for only the least-critical computations.
Our goal is to improve soft processors by scaling their performance and expanding their suitability to more critical computation. To this end, we focus on the data parallelism found in many embedded applications and propose that soft processors be augmented with vector extensions to exploit this parallelism. We support this proposal through experimentation with a parameterized soft vector processor called VESPA (Vector Extended Soft Processor Architecture), which is designed, implemented, and evaluated on real FPGA hardware.
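To illustrate the kind of data parallelism the abstract refers to, the sketch below shows a simple element-wise kernel (hypothetical, not taken from the thesis) whose iterations are independent, so a vector soft processor with N lanes can process N elements per vector instruction rather than one element per scalar instruction.

#include <stddef.h>
#include <stdint.h>

/* Element-wise saturating add, typical of embedded media kernels.
 * Every iteration is independent of the others, which is exactly the
 * data parallelism a soft vector processor exploits: with N lanes it
 * can retire N elements per vector instruction. Illustrative only. */
void saturating_add_u8(uint8_t *dst, const uint8_t *a,
                       const uint8_t *b, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        unsigned sum = (unsigned)a[i] + (unsigned)b[i];
        dst[i] = (sum > 255u) ? 255u : (uint8_t)sum;
    }
}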
The scalability of VESPA, combined with several other architectural parameters, can be used to finely span a large design space and derive a custom architecture that exactly matches the needs of an application. Such customization is a key advantage for soft processors, since their architectures can easily be reconfigured by the end user. Specifically, customizations can be made to the pipeline, functional units, and memory system within VESPA. In addition, general-purpose overheads can be automatically eliminated from VESPA.
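As a rough sketch of what such a per-application design space might look like, the configuration record below lists the sort of knobs the abstract mentions (pipeline, functional units, memory system). The names, ranges, and example values are hypothetical placeholders, not the actual VESPA parameters.

#include <stdbool.h>

/* Hypothetical configuration knobs for a soft vector processor;
 * names and values are illustrative, not the real VESPA generics. */
typedef struct {
    unsigned num_lanes;         /* vector lanes operating in parallel  */
    unsigned max_vector_len;    /* elements per vector register        */
    unsigned lane_width_bits;   /* ALU width per lane (8/16/32)        */
    unsigned dcache_kb;         /* data cache capacity                 */
    unsigned dcache_line_bytes; /* cache line / memory bus width       */
    bool     has_vector_mul;    /* drop unused functional units        */
} soft_vector_config;

/* Example: a narrow configuration for a control-heavy application and
 * a wide one for a highly data-parallel kernel. */
static const soft_vector_config small_cfg = {  2,  32, 32,  4, 16, false };
static const soft_vector_config wide_cfg  = { 16, 256, 32, 32, 64, true  };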
Comparing VESPA to manual hardware design, we observe a 13x speed advantage for hardware over our fastest VESPA, though this is significantly less than the 500x speed advantage hardware holds over scalar soft processors. The performance-per-area of VESPA is also observed to be significantly higher than that of a scalar soft processor, suggesting that the addition of vector extensions makes more efficient use of silicon area for data-parallel workloads.
|
2 |
Speeding up matrix computation kernels by sharing vector coprocessor among multiple cores on chip. Dahlberg, Christopher, January 2012.
Today's computer systems are developing towards lower energy consumption while maintaining high performance. These are contradictory requirements and pose a great challenge. A good example of an application where this matters is the smartphone: the constraint is long battery life while delivering the high performance required by future 2D/3D applications. One solution is heterogeneous systems, built from components that are specialized for different tasks and can execute them fast with low energy consumption. These could be specialized for, e.g., encoding/decoding, encryption/decryption, image processing, or communication. At the Computer Architecture and Parallel Processing Laboratory (CAPPL) at the New Jersey Institute of Technology (NJIT), a vector co-processor has been developed. The vector co-processor has the unusual feature of being able to receive instructions from multiple hosts (scalar cores). In addition, a test system with a couple of scalar processors using the vector processor has been developed. This thesis describes this processor and its test system. It also presents the development of math applications involving matrix operations. The results lead to the conclusion that the vector co-processor saves a substantial amount of energy while speeding up the execution of the applications. Finally, the thesis describes an extension of the vector co-processor design that makes it possible to monitor the throughput of instructions and data in the processor.
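As a concrete example of the matrix computation kernels the thesis targets, the sketch below is a plain scalar matrix multiply. In the system described, each scalar core would offload the inner, data-parallel work to the shared vector co-processor; the abstract does not specify that offload interface, so only a scalar reference version with hypothetical names is shown.

#include <stddef.h>

/* Scalar reference for C = A * B on row-major n-by-n matrices. In the
 * thesis's test system, each scalar core would dispatch the inner-loop
 * work to the shared vector co-processor instead of executing it
 * element by element; the offload API is not given in the abstract. */
void matmul_ref(float *C, const float *A, const float *B, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        for (size_t j = 0; j < n; j++) {
            float acc = 0.0f;
            for (size_t k = 0; k < n; k++)
                acc += A[i * n + k] * B[k * n + j];
            C[i * n + j] = acc;
        }
    }
}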
|