This report evaluates whether an interpreted, high-level, garbage-collected language has enough information about its memory behaviour to make better cache decisions than modern general-purpose CPU hardware. With a generational garbage collector, around 90% of all objects never leave the first generation, depending on the promotion algorithm and generation size. This report is based on the hypothesis that, because of the low promotion rate, accesses to higher generations are rare enough not to benefit from caching. To test this hypothesis, we built an operating system with a Scheme interpreter in kernel mode, where the interpreter controls the cache. Generic x86 PC hardware was used, since it allows fine-grained control of cache decisions. Measurements of execution time in this interpreter show that disabling the cache for generations above the first gives no performance gain, but rather a performance loss of up to 50%. We conclude that this interpreter design is not an improvement, but we cannot conclude that the hypothesis is false in general. We suggest building a better CPU simulator to gather more data from which to make caching decisions, moving internal interpreter data structures into the garbage-collected heap, and modifying the hardware to allow control in the currently rigid dimension of where data is cached: for example, separate control of the instruction and data caches, and separate data caches for different areas of memory.
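The abstract states that generic x86 hardware allows fine-grained control of cache decisions. One mechanism available to kernel-mode code on x86 is the variable-range MTRRs (Memory Type Range Registers), which can mark a physical address range as uncacheable. The C sketch below illustrates that mechanism under stated assumptions; it is not the thesis's actual kernel code. The function names, the 36-bit physical address width, and the choice of variable MTRR pair 0 are assumptions, and the full MTRR update sequence mandated by the Intel SDM (disable caching, flush, write MSRs, re-enable) is omitted for brevity.

/*
 * Sketch: mark the heap region backing generations above the first
 * as uncacheable via variable-range MTRR pair 0.
 * Hypothetical helper, not the thesis's implementation.
 */
#include <stdint.h>

#define IA32_MTRR_PHYSBASE0 0x200u   /* base MSR of variable MTRR pair 0 */
#define IA32_MTRR_PHYSMASK0 0x201u   /* matching mask MSR */
#define MTRR_TYPE_UC        0x00u    /* memory type: uncacheable */
#define MTRR_MASK_VALID     (1ull << 11)

static inline void wrmsr(uint32_t msr, uint64_t value)
{
    uint32_t lo = (uint32_t)value, hi = (uint32_t)(value >> 32);
    __asm__ volatile("wrmsr" : : "c"(msr), "a"(lo), "d"(hi) : "memory");
}

/* Mark [base, base + size) as uncacheable.  Per the MTRR rules, base
 * must be 4 KiB-aligned and size a power of two.  A 36-bit physical
 * address space (PAE-era hardware) is assumed here. */
static void heap_disable_cache(uint64_t base, uint64_t size)
{
    uint64_t phys_mask = (1ull << 36) - 1;

    wrmsr(IA32_MTRR_PHYSBASE0, (base & phys_mask) | MTRR_TYPE_UC);
    wrmsr(IA32_MTRR_PHYSMASK0,
          (~(size - 1) & phys_mask & ~0xFFFull) | MTRR_MASK_VALID);
}

With such a routine, the interpreter could switch off caching for the older generations once the collector has sized and placed them, which is the kind of experiment the measurements above describe.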
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:liu-7557
Date | January 2006 |
Creators | Karlsson, Karl-Johan |
Publisher | Linköpings universitet, Institutionen för datavetenskap
Source Sets | DiVA Archive at Uppsala University
Language | Swedish |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |