
Towards time predictable and efficient cache management in multi-threaded systems

The introduction of cache memory in computer systems narrowed the well-known speed gap between memory and the CPU. However, several issues can arise within the cache that significantly impact an application's performance and timing predictability. This thesis investigates one such issue: cache contention. The problem is most commonly observed in multicore architectures, but it can also arise in any system that schedules multiple threads. In this thesis, we show a scenario where cache contention occurs locally in the L1 data cache on a single-core, multi-threaded system, which allows us to examine the impact of local cache contention on system performance and timing predictability. We further mitigate cache contention through a way-based partitioning technique, proposing an approach that avoids cache contention while still maintaining reasonable overall performance. Our results show that way-partitioning offers inter-thread isolation at the cost of a slight performance drop.

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:mdh-51069
Date January 2020
Creators Zivojevic, Vildan
Publisher Mälardalens högskola, Akademin för innovation, design och teknik
Source Sets DiVA Archive at Uppsala University
Language English
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess