
LEAST-RECENTLY-USED (LRU) CIRCUIT DESIGN FOR PRIORITIZED CACHE

In modern embedded systems, real-time applications are often executed on multi-core systems that also run applications that are not real-time critical. It is well known that cache sharing among the cores of a multi-core system, or among concurrent threads running on a single CPU, can delay the execution of real-time applications, making worst-case execution time (WCET) prediction for these applications more difficult. A promising approach to this problem is the prioritized cache. Currently, prioritized caches are implemented at the architectural level using cache controllers. This thesis focuses on implementing two prioritized LRU (least-recently-used) cache replacement policy circuits inside the cache circuit itself to support prioritized cache operation, which decreases cache latency. The circuits are implemented using the Synopsys 28nm EDK, and based on these circuit implementations the area and power overheads associated with the prioritized cache are investigated. Two prioritized LRU circuit designs are presented.
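To illustrate the general idea of a prioritized LRU replacement policy, the following is a minimal behavioral sketch in C of one plausible scheme: each cache way carries a priority bit, and the victim is the least-recently-used low-priority line, with a high-priority line evicted only when no low-priority line exists in the set. The priority-bit scheme, the victim-selection rule, and all names below are assumptions made for illustration; they are not the thesis's actual circuit designs.

/*
 * Illustrative software model of a prioritized LRU policy for one cache set.
 * Assumed scheme: evict the LRU low-priority way first; evict a high-priority
 * (real-time) way only if every way in the set is high priority.
 * This is not the circuit design described in the thesis.
 */
#include <stdio.h>
#include <stdbool.h>

#define WAYS 4

typedef struct {
    bool valid;
    bool high_priority;  /* set for lines owned by real-time tasks */
    unsigned age;        /* 0 = most recently used, larger = older */
} Way;

/* Mark one way as most recently used and age the ways that were newer. */
static void touch(Way set[WAYS], int used)
{
    for (int w = 0; w < WAYS; w++)
        if (set[w].valid && set[w].age < set[used].age)
            set[w].age++;
    set[used].age = 0;
}

/* Choose a victim: prefer an invalid way, then the LRU low-priority way,
 * and fall back to the global LRU way only if all ways are high priority. */
static int select_victim(const Way set[WAYS])
{
    int lru_low = -1, lru_any = -1;

    for (int w = 0; w < WAYS; w++) {
        if (!set[w].valid)
            return w;  /* free slot, no eviction needed */
        if (lru_any < 0 || set[w].age > set[lru_any].age)
            lru_any = w;
        if (!set[w].high_priority &&
            (lru_low < 0 || set[w].age > set[lru_low].age))
            lru_low = w;
    }
    return lru_low >= 0 ? lru_low : lru_any;
}

int main(void)
{
    Way set[WAYS] = {
        { true, true,  3 },  /* way 0: real-time line, globally oldest */
        { true, false, 2 },  /* way 1: non-critical line               */
        { true, false, 1 },
        { true, true,  0 },
    };

    int victim = select_victim(set);
    printf("evict way %d (priority=%d, age=%u)\n",
           victim, set[victim].high_priority, set[victim].age);

    touch(set, victim);  /* the newly installed line becomes MRU */
    return 0;
}

In this example the globally oldest line (way 0) is protected by its priority bit, so the policy instead evicts way 1, the oldest non-critical line.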

Identifier: oai:union.ndltd.org:siu.edu/oai:opensiuc.lib.siu.edu:theses-2570
Date: 01 December 2014
Creators: Eaton, Ronald
Publisher: OpenSIUC
Source Sets: Southern Illinois University Carbondale
Detected Language: English
Type: text
Format: application/pdf
Source: Theses
