
Hardware acceleration of convolutional neural networks on FPGA

As machine learning algorithms evolve, they are seeing wider use in traditional signal processing applications. One such area is radio, where they enable improved signal identification algorithms. Given the large computational complexity of convolutional neural networks, it is important to use platforms that are as fast and energy efficient as possible. This thesis investigates hardware acceleration of convolutional neural networks on field programmable gate arrays, a type of reconfigurable integrated circuit. An existing toolflow, Haddoc2, is used and evaluated. This tool automates the mapping of a convolutional neural network from a high-level description in Caffe to a synthesisable design in the VHDL hardware description language. Four models of different sizes are trained on the MNIST dataset, and accelerators for these are generated at different bitwidths and then simulated in a VHDL testbench. The resulting accuracies are tolerable for the target problem, and Haddoc2 can produce fast accelerators that work well for smaller networks. Large networks were found to consume large amounts of resources in the field programmable gate array and are not feasible for a practical application. Treating the weights as constants makes the accelerators fast, since there is no memory bottleneck, but also less flexible, since a new set of weights requires re-synthesising the design and reprogramming the field programmable gate array.
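
The abstract mentions evaluating accelerators at different bitwidths; a common step behind such an evaluation is quantising the trained weights to signed fixed point before they are baked into the hardware as constants. The sketch below illustrates that idea in Python with NumPy as a minimal, hedged example: the function name, the rounding and saturation scheme, and the stand-in 3x3 kernel are assumptions for illustration and are not taken from the thesis or from Haddoc2.

```python
import numpy as np

def quantize_fixed_point(weights, total_bits, frac_bits):
    """Round weights to signed fixed-point with the given total and fractional bits.

    Values outside the representable range are saturated. The scheme
    (round-to-nearest with saturation) is an assumption for illustration,
    not necessarily the one used by Haddoc2 or in the thesis.
    """
    scale = 2 ** frac_bits
    q_min = -(2 ** (total_bits - 1))       # most negative integer code
    q_max = 2 ** (total_bits - 1) - 1      # most positive integer code
    codes = np.clip(np.round(weights * scale), q_min, q_max)
    return codes / scale                   # back to real values for accuracy simulation

# Hypothetical usage: compare a layer's weights at a few bitwidths.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=(3, 3))     # stand-in for a 3x3 convolution kernel
for bits in (4, 6, 8):
    wq = quantize_fixed_point(w, total_bits=bits, frac_bits=bits - 2)
    print(bits, "bits, max abs error:", np.abs(w - wq).max())
```

Sweeping the bitwidth in this way, and re-running inference with the quantised weights, is one way to estimate how much accuracy a given fixed-point accelerator configuration would give up relative to the floating-point model.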

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-402937
Date January 2020
Creators Myrén, Adam
Source Sets DiVA Archive at Uppsala University
Language English
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess
Relation UPTEC E, 1654-7616 ; 20 001
