
Paralelní trénování neuronových sítí pro rozpoznávání řeči / Parallel Training of Neural Networks for Speech Recognition

This thesis explores different parallelization strategies for the training procedure of artificial neural networks. The networks are trained as phoneme-state acoustic descriptors for speech recognition. Two effective parallelization strategies were implemented and compared. The first is data parallelization, where the training is split across several POSIX threads. The second is node parallelization, which uses the CUDA framework for general-purpose computing on modern graphics cards. The first strategy yielded a 4x speed-up, while the second achieved nearly a 10x speed-up. The stochastic gradient descent algorithm with error backpropagation was used for the training. After a short introduction, the second chapter presents the motivation and places neural networks in the context of speech recognition. The third chapter is theoretical: it discusses the anatomy of a neural network and the training method used. The following chapters focus on the design and implementation of the project and describe the phases of its iterative development. The last, extensive chapter describes the setup of the testing system and reports the experimental results. Finally, the results are summarized and possible extensions of the project are proposed.
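As a rough illustration of the node-parallelization idea the abstract describes, the sketch below maps each neuron of a fully connected sigmoid layer to one CUDA thread during the forward pass. This is a minimal sketch, not the thesis code: the kernel name forward_layer, the layer sizes, and the choice of sigmoid nonlinearity are illustrative assumptions.

```cuda
// Node-level parallelism sketch: one CUDA thread computes one neuron's
// activation for a fully connected sigmoid layer. All names and
// dimensions are illustrative, not taken from the thesis.
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

__global__ void forward_layer(const float *W,  // weights, one row per neuron
                              const float *x,  // input vector
                              const float *b,  // biases
                              float *y,        // output activations
                              int n_in, int n_out) {
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (j < n_out) {
        float a = b[j];
        for (int i = 0; i < n_in; ++i)       // dot product of row j with input
            a += W[j * n_in + i] * x[i];
        y[j] = 1.0f / (1.0f + expf(-a));     // sigmoid nonlinearity
    }
}

int main() {
    const int n_in = 4, n_out = 3;           // toy layer sizes
    float hW[n_out * n_in], hx[n_in], hb[n_out], hy[n_out];
    for (int k = 0; k < n_out * n_in; ++k) hW[k] = 0.1f;
    for (int k = 0; k < n_in; ++k)  hx[k] = 1.0f;
    for (int k = 0; k < n_out; ++k) hb[k] = 0.0f;

    float *dW, *dx, *db, *dy;
    cudaMalloc(&dW, sizeof(hW)); cudaMalloc(&dx, sizeof(hx));
    cudaMalloc(&db, sizeof(hb)); cudaMalloc(&dy, sizeof(hy));
    cudaMemcpy(dW, hW, sizeof(hW), cudaMemcpyHostToDevice);
    cudaMemcpy(dx, hx, sizeof(hx), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, sizeof(hb), cudaMemcpyHostToDevice);

    forward_layer<<<1, 128>>>(dW, dx, db, dy, n_in, n_out);
    cudaMemcpy(hy, dy, sizeof(hy), cudaMemcpyDeviceToHost);
    for (int j = 0; j < n_out; ++j) printf("y[%d] = %f\n", j, hy[j]);

    cudaFree(dW); cudaFree(dx); cudaFree(db); cudaFree(dy);
    return 0;
}
```

The data-parallel strategy would instead run several such forward/backward passes on disjoint minibatches in separate POSIX threads and combine the resulting gradients before each weight update.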

Identifier oai:union.ndltd.org:nusl.cz/oai:invenio.nusl.cz:237184
Date January 2010
Creators Veselý, Karel
Contributors Fousek, Petr, Burget, Lukáš
Publisher Vysoké učení technické v Brně. Fakulta informačních technologií
Source Sets Czech ETDs
Language Czech
Detected Language English
Type info:eu-repo/semantics/masterThesis
Rights info:eu-repo/semantics/restrictedAccess
