Efficient use of resources when implementing machine learning in an embedded environment

Machine learning, and in particular deep-learning models, have been in the spotlight for the last year; the release of ChatGPT especially caught the attention of the public. But many of the most popular models are large, with millions or billions of parameters. In parallel, the number of smart products constituting the Internet of Things is rapidly increasing. The need for small, resource-efficient machine-learning models can therefore be expected to grow in the coming years. This work investigates the implementation of two different models in embedded environments: random forests, which are straightforward and relatively easy to implement, and transformer models, which are more complex and challenging to implement. The process of training the models in a high-level language and implementing and running inference in a low-level language has been studied. It is shown that it is possible to train a transformer in Python and export it by hand to C, but that this comes with several challenges that should be taken into consideration before the approach is chosen. It is also shown that a transformer model can be successfully used for signal extraction, a new area of application. Different ways of optimizing the model, such as pruning and quantization, have been studied. Finally, it is shown that a transformer model with an initial noise filter performs better than the existing hand-written code on self-generated messages, but worse on real-world data, which indicates that the training data should be improved.
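The thesis source code is not included in this record, so the following is only a minimal sketch of what "training in Python and exporting by hand to C" could look like in practice: trained parameters are written out as constant C arrays and compiled into the firmware, and a small hand-written kernel (here a single dense layer with ReLU, one building block of a transformer feed-forward network) runs inference without any external ML runtime. All names and values (W1, B1, IN_DIM, OUT_DIM) are hypothetical.

    /*
     * Illustrative sketch only; weights below stand in for values that
     * would be exported from a model trained in Python.
     */
    #include <stddef.h>

    #define IN_DIM  4
    #define OUT_DIM 3

    /* Parameters exported by hand from the trained Python model. */
    static const float W1[OUT_DIM][IN_DIM] = {
        { 0.12f, -0.40f,  0.05f,  0.33f },
        {-0.21f,  0.08f,  0.91f, -0.17f },
        { 0.44f,  0.02f, -0.36f,  0.58f },
    };
    static const float B1[OUT_DIM] = { 0.01f, -0.02f, 0.00f };

    /* One dense layer with ReLU: out = max(0, W1 * in + B1). */
    static void dense_relu(const float in[IN_DIM], float out[OUT_DIM])
    {
        for (size_t i = 0; i < OUT_DIM; ++i) {
            float acc = B1[i];
            for (size_t j = 0; j < IN_DIM; ++j) {
                acc += W1[i][j] * in[j];
            }
            out[i] = (acc > 0.0f) ? acc : 0.0f;
        }
    }

Optimizations such as quantization would, under the same assumptions, replace the float arrays with integer ones (for example int8 weights plus a scale factor), trading some accuracy for a smaller memory footprint on the embedded target.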

Identifier: oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:umu-211379
Date: January 2023
Creators: Eklöf, Johannes
Publisher: Umeå universitet, Institutionen för fysik
Source Sets: DiVA Archive at Upsalla University
Language: English
Detected Language: English
Type: Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format: application/pdf
Rights: info:eu-repo/semantics/openAccess