Compression and Distribution of a Neural Network With IoT Applications

Backe, Hannes; Rydberg, David. January 2021.
In order to enable deployment of large neural network models on devices with limited memory capacity, refined methods for compressing these are essential. This project aims at investigating some possible solutions, namely pruning and partitioned logit-based knowledge distillation, using teacher-student learning methods. A cumbersome benchmark teacher neural network was developed and used as a reference. A special case of logit-based teacher-student learning was then applied, resulting not only in a compressed model, but also in a convenient way of distributing it. The individual student models were able to mimic the parts of the teacher model with small losses, while the network of student models achieved similar accuracy as the teacher model. Overall, the size of the network of student models was around 11% of the teacher. Another popular method of compressing neural networks was also tested: pruning. Pruning the teacher network resulted in a much smaller model, around 18% of the teacher model, with similar accuracy. / Bachelor's degree project in electrical engineering 2021, KTH, Stockholm
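The logit-based teacher-student learning mentioned in the abstract can be sketched in a few lines. This is a minimal illustration of the standard temperature-scaled distillation loss, not the thesis's actual partitioned implementation; the temperature value and example logits are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    # Soften the logits with a temperature before normalizing.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    # KL divergence between the softened teacher distribution ("soft
    # targets") and the student's softened distribution, scaled by T^2
    # as is conventional in logit-based knowledge distillation.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return kl * temperature ** 2

# A student whose logits track the teacher's incurs a smaller loss
# than one whose logits diverge:
teacher = [2.0, 1.0, 0.1]
close_student = [1.9, 1.1, 0.2]
far_student = [0.1, 1.0, 2.0]
print(distillation_loss(teacher, close_student)
      < distillation_loss(teacher, far_student))  # True
```

Minimizing this loss trains the student to mimic the teacher's output distribution, which is what allows each small student model to stand in for its part of the teacher network.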
