ARCHITECTURE AND MAPPING CO-EXPLORATION AND OPTIMIZATION FOR DNN ACCELERATORS
Trewin, Benjamin Nicholas, 01 May 2024 (PDF)
Optimizing a deep neural network (DNN) accelerator's energy and/or latency across a variety of networks is extremely difficult because of the sheer size of the search space. DNN accelerators span a huge space of hardware architecture topologies and characteristics, each of which may perform better or worse on a given DNN, and each DNN layer can in turn be mapped to the hardware in a vast number of configurations. Moreover, a mapping that is optimal on one architecture is not necessarily optimal on another: the two factors depend on one another. Hardware characteristics and mappings must therefore be co-optimized, so that the search finds not only an optimal mapping but also the best architecture for a given DNN. This work presents Blink, a design space exploration (DSE) tool that co-optimizes hardware attributes and mapping configurations. Blink finds promising hardware architectures with a genetic algorithm and, for each candidate hardware configuration, finds strong mappings with a pruned random selection method. Each architecture, layer, and mapping is sent to Timeloop, a DNN accelerator simulator, to obtain accelerator statistics, which are fed back to the genetic algorithm to select the next population. Through this method, novel DNN accelerator solutions can be identified without the computationally massive task of exhaustive simulation.
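The co-optimization loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the thesis's implementation: the hardware and mapping parameters are invented for the example, and the `evaluate` function is a toy stand-in for invoking the Timeloop simulator and parsing its energy/latency statistics.

```python
import random

def evaluate(hw, mapping):
    # Toy cost model standing in for a Timeloop simulation of this
    # (architecture, mapping) pair; lower is better.
    return abs(hw["pes"] - mapping["tile"] * 8) + hw["buffer_kb"] / (mapping["tile"] + 1)

def random_hw():
    # Hypothetical architecture knobs: PE count and on-chip buffer size.
    return {"pes": random.choice([64, 128, 256]),
            "buffer_kb": random.choice([32, 64, 128])}

def random_mapping():
    # Hypothetical mapping knob: a single tiling factor.
    return {"tile": random.randint(1, 32)}

def best_mapping(hw, samples=50):
    # Pruned random mapping search: sample candidates, keep the cheapest.
    candidates = [random_mapping() for _ in range(samples)]
    return min(candidates, key=lambda m: evaluate(hw, m))

def genetic_search(pop_size=10, generations=20):
    population = [random_hw() for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness of an architecture = cost under its best found mapping.
        scored = sorted(population, key=lambda hw: evaluate(hw, best_mapping(hw)))
        parents = scored[: pop_size // 2]          # selection: keep best half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)       # crossover
            child = {"pes": a["pes"], "buffer_kb": b["buffer_kb"]}
            if random.random() < 0.2:              # occasional mutation
                child["pes"] = random.choice([64, 128, 256])
            children.append(child)
        population = parents + children
    best = min(population, key=lambda hw: evaluate(hw, best_mapping(hw)))
    return best, best_mapping(best)

hw, mapping = genetic_search()
print(hw, mapping)
```

In the actual tool the fitness evaluation is a Timeloop run per architecture/layer/mapping triple, which is why pruning the mapping samples per candidate matters: it bounds the number of simulations each generation requires.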