
Regression Modelling of Power Consumption for Heterogeneous Processors

This thesis consists of two parts, both concerned with parallel and heterogeneous processing.
The first part describes DistCL, a distributed OpenCL framework that allows a cluster of GPUs to be programmed like a single device.
It uses programmer-supplied meta-functions that associate work-items with the memory they access.
DistCL achieves speedups of up to 29x using 32 peers.
By comparing DistCL to SnuCL, we determine that the compute-to-transfer ratio of a benchmark is the best predictor of its performance scaling when distributed.
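To illustrate the idea, the following is a minimal sketch and not DistCL's actual API; the function name, signature, and buffer layout are hypothetical. A meta-function for a 1-D vector-add kernel can report which byte range of each buffer a given sub-range of work-items touches, so the framework knows what data each peer needs before it runs that sub-range:

    # Hypothetical meta-function sketch for a 1-D vector-add kernel
    # C[i] = A[i] + B[i]. Given the sub-range of work-items assigned to
    # one peer, it reports the byte range of each buffer that the
    # sub-range reads or writes, so only that data is transferred.

    FLOAT_SIZE = 4  # sizeof(float) on the device

    def vector_add_region(start_item, end_item):
        """Return {buffer_name: (byte_offset, byte_length)} accessed by
        work-items in [start_item, end_item)."""
        offset = start_item * FLOAT_SIZE
        length = (end_item - start_item) * FLOAT_SIZE
        return {
            "A": (offset, length),  # read
            "B": (offset, length),  # read
            "C": (offset, length),  # written
        }

    # Example: with 1024 work-items split across 4 peers, peer 2 owns
    # items [512, 768) and therefore bytes [2048, 3072) of each buffer.
    print(vector_add_region(512, 768))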
The second part presents a statistical power model for the AMD Fusion heterogeneous processor.
We present a systematic methodology to create a representative set of compute micro-benchmarks using data collected from real hardware.
The power model is created with data from both micro-benchmarks and application benchmarks.
The model showed an average predictive error of 6.9% on heterogeneous workloads.
The Multi2Sim heterogeneous simulator was modified to support configurable power modelling.
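As a rough illustration of the regression approach only, the sketch below uses assumed counter names and made-up numbers, not the thesis's data, features, or coefficients. Measured power is fit as a linear function of activity counters collected from micro-benchmarks and then used to predict an unseen workload:

    # Illustrative power-regression sketch: fit measured power as a linear
    # function of per-component activity counters, then predict a new
    # workload. Counter choices and values are placeholders.
    import numpy as np

    # Rows: micro-benchmark runs. Columns: CPU instructions/cycle, GPU ALU
    # utilization, memory bandwidth utilization (all normalized to [0, 1]).
    activity = np.array([
        [0.9, 0.0, 0.2],
        [0.1, 0.8, 0.5],
        [0.5, 0.5, 0.7],
        [0.2, 0.1, 0.1],
    ])
    measured_power_w = np.array([28.0, 35.0, 40.0, 15.0])

    # Add a constant column for static (idle) power and solve least squares:
    # power ~= w_static + w_cpu*x1 + w_gpu*x2 + w_mem*x3
    X = np.column_stack([np.ones(len(activity)), activity])
    weights, *_ = np.linalg.lstsq(X, measured_power_w, rcond=None)

    new_workload = np.array([1.0, 0.6, 0.7, 0.4])  # constant + counters
    predicted_w = new_workload @ weights
    print(f"coefficients: {weights}, predicted power: {predicted_w:.1f} W")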

Identifier: oai:union.ndltd.org:TORONTO/oai:tspace.library.utoronto.ca:1807/42818
Date: 22 November 2013
Creators: Diop, Tahir
Contributors: Anderson, Jason; Enright Jerger, Natalie
Source Sets: University of Toronto
Language: en_ca
Detected Language: English
Type: Thesis
