As the complexity of embedded systems grows, operating systems (OSes) are increasingly used in embedded devices such as mobile phones, media players, and other consumer electronics. Despite their convenience and flexibility, such operating systems can be overly general and contain features and code that are not needed in every application context, which incurs unnecessary performance overheads. In most embedded systems, resources such as processing power, available memory, and power consumption are strictly constrained. In particular, the amount of memory on embedded devices is often very limited. This, together with the widespread use of operating systems in embedded devices, makes it important to reduce the memory footprint of operating systems. This dissertation addresses this challenge and presents automated ways to reduce the memory footprint of OS kernels for embedded systems.

First, we present kernel code compaction, an automated approach that statically reduces the code size of an OS kernel by removing unused functionality. OS kernel code differs from ordinary application code: it contains a significant amount of hand-written assembly code, multiple entry points, implicit control flow paths involving interrupt handlers, and frequent indirect control flow via function pointers. We use a novel "approximated compilation" technique to apply source-level pointer analysis to hand-written assembly code. A prototype implementation of our idea on an Intel x86 platform with a minimally configured Linux kernel obtains a code size reduction of close to 24%.

Even though code compaction can remove a portion of the OS kernel code, most of the remaining kernel code is executed infrequently, if at all, when the system runs typical embedded benchmarks such as MiBench. Our second contribution is therefore on-demand code loading, an automated approach that keeps rarely used code on secondary storage and loads it into main memory only when it is needed. To minimize the overhead of code loading, we propose a greedy node-coalescing algorithm that groups closely related code together; a small illustrative sketch of this kind of grouping follows the abstract. The experimental results show that this approach can reduce the memory requirements of the Linux kernel code by about 53% with little degradation in performance.

Last, we describe dynamic data structure compression, an approach that reduces the runtime memory footprint of dynamic data structures in an OS kernel. A prototype implementation for the Linux kernel reduces the memory consumption of the slab allocators in Linux by 17.5% when running the MediaBench suite, while incurring only a minimal increase in execution time (1.9%).
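The greedy grouping step mentioned above can be pictured with a small sketch. The C program below is a hypothetical illustration, not the dissertation's implementation: it assumes a weighted call graph whose edge weights approximate how often two functions invoke each other, and it greedily merges the two groups joined by the heaviest remaining edge as long as the merged group still fits in one load unit. All sizes, weights, and the 4 KiB unit size are made-up example values.

/*
 * Hypothetical sketch of greedy node coalescing (illustration only):
 * functions are nodes of a weighted call graph; we repeatedly merge
 * the two groups joined by the heaviest remaining edge, provided the
 * merged group still fits in one load unit.  Closely related code
 * thus ends up in the same on-demand-loaded unit.
 */
#include <stdio.h>

#define NFUNCS     6
#define UNIT_SIZE  4096                 /* bytes per load unit (assumed) */

static int size[NFUNCS]  = { 900, 1200, 700, 2500, 600, 1800 };
static int group[NFUNCS] = { 0, 1, 2, 3, 4, 5 };   /* group id per function */
static int gsize[NFUNCS] = { 900, 1200, 700, 2500, 600, 1800 };

struct edge { int a, b, weight; };       /* call-graph edge a <-> b */
static struct edge edges[] = {
    { 0, 1, 50 }, { 1, 2, 40 }, { 0, 2, 10 },
    { 3, 4, 30 }, { 4, 5,  5 }, { 2, 3,  2 },
};
#define NEDGES ((int)(sizeof(edges) / sizeof(edges[0])))

static void coalesce(void)
{
    int done[NEDGES] = { 0 };

    for (;;) {
        int best = -1;
        for (int i = 0; i < NEDGES; i++)   /* pick heaviest unused edge */
            if (!done[i] && (best < 0 || edges[i].weight > edges[best].weight))
                best = i;
        if (best < 0)
            break;
        done[best] = 1;

        int ga = group[edges[best].a], gb = group[edges[best].b];
        if (ga == gb || gsize[ga] + gsize[gb] > UNIT_SIZE)
            continue;                      /* already merged, or too big */

        gsize[ga] += gsize[gb];            /* merge group gb into group ga */
        gsize[gb] = 0;
        for (int f = 0; f < NFUNCS; f++)
            if (group[f] == gb)
                group[f] = ga;
    }
}

int main(void)
{
    coalesce();
    for (int f = 0; f < NFUNCS; f++)
        printf("function %d (%4d bytes) -> load unit %d\n", f, size[f], group[f]);
    return 0;
}

With the example numbers, functions 0, 1, and 2 end up in one load unit and functions 3 and 4 in another, while function 5 stays alone because merging it would exceed the unit size; the actual dissertation algorithm operates on real kernel call graphs and page-sized load units.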
Identifier | oai:union.ndltd.org:arizona.edu/oai:arizona.openrepository.com:10150/196010 |
Date | January 2009 |
Creators | He, Haifeng |
Contributors | Debray, Saumya K., Andrews, Gregory R., Hartman, John, Gniady, Chris |
Publisher | The University of Arizona. |
Source Sets | University of Arizona |
Language | English |
Detected Language | English |
Type | text, Electronic Dissertation |
Rights | Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author. |