Improving processor power demand comprehension in data-driven power and software phase classification and prediction
Khoshbakht, Saman
14 August 2018
The single-core performance trend predicted by Moore's law has been impeded in recent years, partly due to the limitations imposed by increasing processor power demands. One way to mitigate this limitation is the introduction of multi-core and multi-processor computation.
Another approach to increasing the performance-per-Watt metric is to use the processor's power budget more efficiently. In a single-core system, the processor cannot sustainably dissipate more than the nominal Thermal Design Power (TDP) limit determined for the processor at design time. It is therefore important to understand and manage the power demands of the processes being executed. The same principle applies to multi-core and multi-processor environments. In a multi-processor environment, if the power demands of the workload are known, the power management unit can schedule the workload to a processor in the most efficient way, based on the state of each processor and process; this is an instance of the knapsack problem. Another approach, also applicable to multi-core processors, is to reduce a core's power by lowering its operating voltage and frequency, mitigating power bursts, freeing headroom for other cores, and keeping the total power under the TDP limit. The information collected from the execution of the software running on the processor (i.e. the workload) is the key to determining the power management actions needed at any given time.
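As a minimal illustration of this scheduling idea only (not the power management scheme developed in this work), the following Python sketch greedily places the most power-hungry processes on the least-loaded cores and then steps down the busiest core's frequency whenever the summed power estimate would exceed the TDP budget. All names, power figures, and frequency steps are assumptions made for the example.

# Hypothetical sketch: greedy, knapsack-style placement of processes onto
# cores under a shared TDP budget, with a simple DVFS step-down fallback.

TDP_WATTS = 65.0                    # assumed package-level power limit
FREQ_STEPS = [3.5, 3.0, 2.5, 2.0]   # assumed available frequencies (GHz)

def schedule(processes, num_cores):
    """processes: list of (name, estimated_watts) from a power model/profiler."""
    core_load = [0.0] * num_cores   # estimated power per core
    placement = {}
    # Place the most power-hungry processes first (greedy heuristic).
    for name, watts in sorted(processes, key=lambda p: -p[1]):
        core = min(range(num_cores), key=lambda c: core_load[c])
        core_load[core] += watts
        placement[name] = core
    return placement, core_load

def throttle(core_load, total_budget=TDP_WATTS):
    """While the summed estimate exceeds the budget, scale the busiest core's
    frequency down one step; power is assumed roughly proportional to f."""
    freqs = [FREQ_STEPS[0]] * len(core_load)
    while sum(core_load) > total_budget:
        busiest = max(range(len(core_load)), key=lambda c: core_load[c])
        step = FREQ_STEPS.index(freqs[busiest])
        if step == len(FREQ_STEPS) - 1:
            break                   # no lower frequency step left
        core_load[busiest] *= FREQ_STEPS[step + 1] / FREQ_STEPS[step]
        freqs[busiest] = FREQ_STEPS[step + 1]
    return freqs, core_load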
This work comprises two different approaches to improving the comprehension of software power demands as the software executes on the processor. Creating a power model capable of accurately reflecting the power demands of the software at any given time is a problem addressed by previous research: such a model can be used in processor simulation environments, as well as in the processor itself, to estimate power dissipation without physically measuring it. While it is important to be able to model a program's power based on the instructions it comprises, to the best of our knowledge no existing work investigates the effect of the values being processed on processor power. The first part of this research therefore analyses the effect of software data on power. To collect the data required for this research, a profiler tool was developed by the author and used in both parts of the work.
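Purely as an illustration of the kind of model such a profiler can feed (the thesis's actual model and feature set are not reproduced here), a per-interval linear power model might combine instruction-mix counts with a data-dependent feature such as average operand Hamming weight; all feature names and numbers below are synthetic.

# Hypothetical sketch: a linear per-interval power model fitted offline
# against measured power. The data-dependent column (average operand
# Hamming weight) is the kind of value-sensitive feature discussed above.
import numpy as np

# Each row: [int_ops, mem_ops, fp_ops, avg_operand_hamming_weight] per interval
# (synthetic, illustrative values only).
X = np.array([
    [1.2e6, 0.5e6, 0.3e6, 14.1],
    [0.9e6, 0.4e6, 0.8e6, 22.7],
    [1.5e6, 0.9e6, 0.1e6,  9.4],
    [1.1e6, 0.6e6, 0.4e6, 17.8],
    [0.7e6, 0.3e6, 0.6e6, 25.2],
    [1.4e6, 0.8e6, 0.2e6, 11.0],
])
y = np.array([21.3, 24.8, 19.6, 22.1, 25.5, 20.2])  # measured power (W), synthetic

# Least-squares fit with an intercept term for static/idle power.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

def estimate_power(features):
    """Estimate interval power (W) from a profiler feature vector."""
    return float(np.dot(np.append(features, 1.0), coeffs))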
The second part of this work focuses on the evolution of processor power over time during the execution of the software. Understanding the power demands of the processor at any given time is important for maintaining and managing processor power. Additionally, insight into the future power demands of the software can help the system plan scheduling ahead of time, preparing for any high-power section of the code and exploiting the power headroom made available by an upcoming low-power section. In this part of our work, a new hierarchical approach to software phase classification is developed. The software phase classification problem consists of determining the behaviour of the software at any given time slice by assigning the time slice to one of a set of pre-determined software phases. Each phase is assumed to have known behaviour, either measured from previously observed instances of the phase or estimated by a model. Using a two-tiered hierarchical clustering approach, our proposed phase classification methodology incorporates the recent performance behaviour of the software in order to determine the power phase. We focus on determining the power phase from performance information because real processor power is usually not available without added hardware, whereas a large number of performance counters are available on most modern processors; moreover, based on our observations, the relation between performance phases and power behaviour is highly predictable. This method is shown to provide robust results with little noise compared to other methods, while providing timing accuracy high enough for the processor to act on. To the best of our knowledge, no other existing work provides both this timing accuracy and this reduction in noise.
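To make the two-tiered idea concrete, the following sketch is one possible (and simplified) realisation rather than the algorithm developed in this thesis: the first tier clusters individual time slices of performance-counter values, and the second tier clusters histograms of the recent first-tier labels so that the assigned power phase reflects recent behaviour. The counter set, window length, and cluster counts are assumptions.

# Hypothetical sketch of two-tiered clustering for phase classification
# from performance counters (not the thesis's exact method).
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def classify_phases(counters, history=4, fine_phases=16, coarse_phases=4):
    """counters: (intervals x features) array of per-interval counter values,
    e.g. IPC, cache misses, and branch mispredictions per time slice."""
    # Tier 1: cluster individual time slices into fine-grained behaviours.
    fine = AgglomerativeClustering(n_clusters=fine_phases).fit_predict(counters)

    # Tier 2: summarise the recent window of fine labels as a histogram and
    # cluster the histograms, so each coarse (power) phase label reflects
    # the recent behaviour rather than a single slice.
    hists = np.array([np.bincount(fine[i - history:i], minlength=fine_phases)
                      for i in range(history, len(fine) + 1)])
    coarse = AgglomerativeClustering(n_clusters=coarse_phases).fit_predict(hists)
    return fine, coarse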
Software phase classification can be used to control processor power based on the software's phase at any given time, but it does not provide insight into the future progression of the workload. Finally, therefore, we developed and compared several phase prediction methodologies based on the concepts of phase precursors and phase locality. Phase precursor-based methods rely on detecting the precursors observed before the software enters a certain phase, while phase locality methods rely on the locality principle, which postulates a high probability that the current software behaviour will still be observed in the near future. Both the phase classification and the phase prediction methodologies were shown to reduce the power bursts within a workload and so produce a smoother power trace. As the bursts are removed from one workload's power trace, the multi-core processor's power headroom can be confidently utilized by another process.
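The contrast between the two prediction styles can be sketched as follows; this is an illustrative simplification, not the predictors evaluated in the thesis, and the lookup depth is an assumed parameter.

# Hypothetical sketch: a locality-based last-value predictor versus a
# precursor-based predictor that looks up the most common successor of the
# recent phase sequence.
from collections import defaultdict, Counter

def predict_locality(phase_trace):
    """Locality principle: the next phase is most likely the current one."""
    return phase_trace[-1]

def predict_precursor(phase_trace, depth=3):
    """Learn (precursor sequence -> next phase) counts from the trace seen so
    far, then predict the most frequent successor of the latest precursor."""
    table = defaultdict(Counter)
    for i in range(depth, len(phase_trace)):
        precursor = tuple(phase_trace[i - depth:i])
        table[precursor][phase_trace[i]] += 1
    latest = tuple(phase_trace[-depth:])
    if table[latest]:
        return table[latest].most_common(1)[0][0]
    return predict_locality(phase_trace)   # fall back to locality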