1

On Optimizing Die-stacked DRAM Caches

El Nacouzi, Michel, 22 November 2013
Die-stacking is a new technology that allows multiple integrated circuits to be stacked on top of each other, connected by a high-bandwidth, high-speed interconnect. In particular, die-stacking can be useful in boosting the effective bandwidth and speed of DRAM systems. Die-stacked DRAM caches have recently emerged as one of the top applications of die-stacking. They provide higher capacity than their SRAM counterparts and are faster than off-chip DRAM. In addition, DRAM caches can provide almost eight times the bandwidth of off-chip DRAM. They come with their own challenges, however. Since they are only twice as fast as main memory, they considerably increase latency on misses and incur significant energy overhead for remote lookups in snoop-based multi-socket systems. In this thesis, we present a Dual-Grain Filter for avoiding unnecessary accesses to the DRAM cache at a reduced hardware cost, and we compare it to recent work on die-stacked DRAM caches.
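
The abstract names the Dual-Grain Filter but does not describe its internals, so the C sketch below illustrates only the general idea of a two-grain presence filter under assumed parameters: a coarse bitmap vetoes DRAM-cache lookups for whole 4 KB regions, and a fine block-grain bitmap refines the answer. The table sizes, grain choices, and hash are hypothetical, not the thesis design.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative sketch only: sizes, grains, and hashing are assumptions. */
    #define COARSE_ENTRIES 4096u   /* presence bits at 4 KB region grain */
    #define FINE_ENTRIES   65536u  /* presence bits at 64 B block grain  */

    typedef struct {
        uint8_t coarse[COARSE_ENTRIES / 8];
        uint8_t fine[FINE_ENTRIES / 8];
    } dual_grain_filter;

    static size_t coarse_idx(uint64_t paddr) { return (paddr >> 12) % COARSE_ENTRIES; }
    static size_t fine_idx(uint64_t paddr)   { return (paddr >> 6)  % FINE_ENTRIES; }

    static bool bit_get(const uint8_t *bm, size_t i) { return bm[i >> 3] & (1u << (i & 7)); }
    static void bit_set(uint8_t *bm, size_t i)       { bm[i >> 3] |= (uint8_t)(1u << (i & 7)); }

    /* Consulted before every DRAM-cache probe: a clear coarse bit means no
     * block of that 4 KB region can be resident, so the slow tag lookup
     * (and any remote snoop) is skipped outright. */
    bool may_hit_dram_cache(const dual_grain_filter *f, uint64_t paddr)
    {
        if (!bit_get(f->coarse, coarse_idx(paddr)))
            return false;                         /* definite miss */
        return bit_get(f->fine, fine_idx(paddr)); /* possible hit  */
    }

    /* On a cache fill, mark both grains present. Eviction handling is
     * omitted for brevity. */
    void on_dram_cache_fill(dual_grain_filter *f, uint64_t paddr)
    {
        bit_set(f->coarse, coarse_idx(paddr));
        bit_set(f->fine, fine_idx(paddr));
    }

Because bits are only ever set on fills, this sketch behaves like a Bloom filter: hash aliasing can cause an occasional useless probe, but a true hit is never filtered out.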
2

Improving the Performance and Energy Efficiency of Emerging Memory Systems

Guo, Yuhua, 01 January 2018
Modern main memory is primarily built from dynamic random access memory (DRAM) chips. As DRAM chips scale to higher densities, three main problems impede DRAM scalability and performance improvement. First, DRAM refresh overhead grows from negligible to severe, which limits DRAM scalability and degrades performance. Second, although memory capacity has increased dramatically over the past decade, memory bandwidth has not kept pace with CPU performance scaling, leading to the memory wall problem. Third, DRAM dissipates considerable power and has been reported to account for as much as 40% of total system energy, a problem that worsens as DRAM scales up. To address these problems, 1) we propose Rank-level Piggyback Caching (RPC) to alleviate DRAM refresh overhead by servicing memory requests and refresh operations in parallel; 2) we propose a high-performance, bandwidth-efficient approach, called SELF, to breaking the memory bandwidth wall by exploiting die-stacked DRAM as a part of memory; 3) we propose a cost-effective and energy-efficient architecture for hybrid memory systems composed of high bandwidth memory (HBM) and phase change memory (PCM), called Dual Role HBM (DR-HBM). In DR-HBM, hot pages are tracked in a cost-effective way and migrated to HBM to improve performance, while cold pages are kept in PCM to save energy.
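
As a rough illustration of the hot/cold page-placement idea behind DR-HBM, the C sketch below counts per-page accesses in a small direct-mapped table and promotes a page from PCM to HBM once its count crosses a threshold. The table size, threshold, epoch decay, and migrate_to_hbm() stub are hypothetical; the abstract does not specify DR-HBM's actual tracking mechanism.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative sketch only: parameters and policy are assumptions. */
    #define TRACKED_PAGES 1024u  /* hypothetical counter-table size     */
    #define HOT_THRESHOLD 64u    /* accesses per epoch before promotion */

    typedef struct {
        uint64_t page;   /* page frame number being tracked */
        uint32_t count;  /* accesses observed this epoch    */
        bool     in_hbm; /* already promoted to HBM?        */
    } page_entry;

    static page_entry table[TRACKED_PAGES];

    /* Placeholder for the actual page-migration machinery. */
    static void migrate_to_hbm(uint64_t page) { (void)page; }

    /* Called on each memory access: count accesses per page and promote a
     * page from PCM to HBM once it proves hot. A direct-mapped table keeps
     * hardware cost low; a conflicting page simply evicts the old entry. */
    void on_access(uint64_t page)
    {
        page_entry *e = &table[page % TRACKED_PAGES];
        if (e->page != page) {   /* conflict: start tracking the new page */
            e->page   = page;
            e->count  = 0;
            e->in_hbm = false;
        }
        if (!e->in_hbm && ++e->count >= HOT_THRESHOLD) {
            migrate_to_hbm(page);  /* hot: move from PCM into HBM */
            e->in_hbm = true;
        }
    }

    /* Halve all counters at the end of each epoch so pages that have gone
     * cold stop qualifying for promotion. */
    void end_epoch(void)
    {
        for (size_t i = 0; i < TRACKED_PAGES; i++)
            table[i].count /= 2;
    }

The direct-mapped table trades accuracy for cost: a conflict can forget a hot page, but the bounded structure keeps the tracking overhead small, in the spirit of the "cost-effective" tracking the abstract describes.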
