The large gap between processor cycle time and memory access time, often referred to as the memory wall, severely limits the performance of streaming applications. Some data centers have reported servers sitting idle for three out of every four clock cycles. High-performance, instruction-sequenced systems are not energy efficient: the execute stage of even a simple pipelined processor consumes only 9% of the pipeline's total energy. A hybrid dataflow system integrated within a memory module is shown to deliver 7.2 times the performance of an Intel Xeon server processor with 368 times better energy efficiency on the analyzed benchmarks.
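As a rough back-of-the-envelope reading of those figures (an illustration, not a result stated in the record), if the 368x figure is taken as energy per benchmark run and the 7.2x figure as runtime on the same runs, the implied power and energy-delay-product (EDP) ratios would be:

\[
\frac{P_{\mathrm{dataflow}}}{P_{\mathrm{Xeon}}}
  = \frac{E_{\mathrm{dataflow}}}{E_{\mathrm{Xeon}}} \cdot \frac{t_{\mathrm{Xeon}}}{t_{\mathrm{dataflow}}}
  = \frac{1}{368} \times 7.2 \approx 0.02
\]
\[
\frac{\mathrm{EDP}_{\mathrm{Xeon}}}{\mathrm{EDP}_{\mathrm{dataflow}}}
  = 368 \times 7.2 \approx 2650
\]

That is, under those assumptions the in-memory dataflow module would draw roughly 2% of the Xeon's power while also finishing the work sooner.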
The dataflow implementation exploits the inherent parallelism and pipelining of the application to improve performance without the overhead of caching, instruction fetch, instruction decode, instruction scheduling, reorder buffers, and speculative execution used by high-performance out-of-order processors. Coarse-grained reconfigurable logic implemented in an energy-efficient silicon process provides the flexibility to implement multiple algorithms in a low-energy solution. Integrating this logic within a 3D-stacked memory module provides lower-latency, higher-bandwidth access to memory while operating independently of the host system processor.
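To make the contrast with instruction-sequenced execution concrete, the following is a minimal, hypothetical C sketch of the general idea of a coarse-grained dataflow pipeline: operations fire purely on data availability through bounded FIFOs, with no instruction fetch, decode, scheduling, or speculation. The node functions and FIFO sizes are invented for illustration and do not represent the dissertation's actual design.

/*
 * Minimal sketch (not the actual design described above): a streaming
 * computation expressed as a static dataflow pipeline.  Each "node" is a
 * coarse-grained operation that fires whenever a token is available on its
 * input FIFO, so stages overlap naturally without instruction fetch/decode,
 * scheduling, or speculation.  Node functions and FIFO depths are
 * hypothetical placeholders chosen for illustration.
 */
#include <stdio.h>

#define FIFO_DEPTH 4
#define STREAM_LEN 8

/* A bounded FIFO carrying data tokens between dataflow nodes. */
typedef struct {
    int buf[FIFO_DEPTH];
    int head, tail, count;
} fifo_t;

static int fifo_push(fifo_t *f, int v) {
    if (f->count == FIFO_DEPTH) return 0;        /* full: back-pressure */
    f->buf[f->tail] = v;
    f->tail = (f->tail + 1) % FIFO_DEPTH;
    f->count++;
    return 1;
}

static int fifo_pop(fifo_t *f, int *v) {
    if (f->count == 0) return 0;                 /* no token: node stays idle */
    *v = f->buf[f->head];
    f->head = (f->head + 1) % FIFO_DEPTH;
    f->count--;
    return 1;
}

/* Coarse-grained nodes: each fires only when its input token is present. */
static int node_scale(int x)  { return x * 3; }  /* hypothetical stage 1 */
static int node_offset(int x) { return x + 7; }  /* hypothetical stage 2 */

int main(void) {
    fifo_t q1 = {0}, q2 = {0};
    int produced = 0, consumed = 0;

    /* Each pass of the loop lets every node attempt to fire once, modelling
     * pipeline stages that advance concurrently, driven only by data
     * availability. */
    while (consumed < STREAM_LEN) {
        int v;

        /* Source node: inject the input stream while the first FIFO has room. */
        if (produced < STREAM_LEN && fifo_push(&q1, produced))
            produced++;

        /* Stage 1 fires when a token is available on q1 and q2 has room. */
        if (q1.count > 0 && q2.count < FIFO_DEPTH) {
            fifo_pop(&q1, &v);
            fifo_push(&q2, node_scale(v));
        }

        /* Stage 2 (sink) fires when a token is available on q2. */
        if (fifo_pop(&q2, &v)) {
            printf("result[%d] = %d\n", consumed, node_offset(v));
            consumed++;
        }
    }
    return 0;
}

In a hardware realization of this idea, each node would correspond to a configured block of coarse-grained reconfigurable logic and each FIFO to the wiring between blocks, so every stage could advance in parallel on every cycle.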
Identifier | oai:union.ndltd.org:unt.edu/info:ark/67531/metadc1248478 |
Date | 08 1900 |
Creators | Shelor, Charles F. |
Contributors | Kavi, Krishna; Bryant, Barrett; Fu, Sung; Cytron, Ron |
Publisher | University of North Texas |
Source Sets | University of North Texas |
Language | English |
Detected Language | English |
Type | Thesis or Dissertation |
Format | viii, 131 pages, Text |
Rights | Public; Shelor, Charles F.; Copyright is held by the author, unless otherwise noted. All Rights Reserved. |