
A General-Purpose GPU Reservoir Computer

The reservoir computer comprises a reservoir of possibly non-linear, possibly chaotic dynamics. By perturbing this reservoir and reading outputs from it, its dynamics may be harnessed to solve complex problems at "the edge of chaos". One of the first forms of reservoir computer, the Echo State Network (ESN), is a form of artificial neural network that builds its reservoir from a large, sparsely connected recurrent neural network (RNN). The ESN was initially introduced as an innovative solution for training RNNs, which until then had been a notoriously difficult task. The innovation of the ESN is that, rather than training the RNN weights, only the output is trained. If this output is assumed to be linear, then linear regression may be used.
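The fixed-reservoir idea described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the thesis's implementation: the sizes, sparsity, and spectral radius below are illustrative choices, and rescaling the reservoir weights to a spectral radius below one is a common heuristic for encouraging the echo state property.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and hyperparameters (not taken from the thesis).
n_reservoir, n_input = 100, 1
spectral_radius, sparsity = 0.9, 0.1

# Sparse random reservoir weights, rescaled so the largest eigenvalue
# magnitude equals the chosen spectral radius. These weights stay fixed;
# only the readout is trained.
W = rng.standard_normal((n_reservoir, n_reservoir))
W[rng.random(W.shape) > sparsity] = 0.0
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))

# Fixed random input weights perturbing the reservoir.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_input))

def run_reservoir(inputs):
    """Drive the untrained reservoir and collect its state history."""
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# Example: drive the reservoir with a sinusoid and collect 200 states.
states = run_reservoir(np.sin(np.linspace(0, 8 * np.pi, 200)))
```

The collected state history is what a linear readout is then fitted against; on a GPU, the dense matrix-vector products in the update loop are the natural target for acceleration.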

This work presents an implementation of the Echo State Network, together with an offline linear-regression training method based on Tikhonov regularisation. The implementation targeted the general-purpose graphics processing unit (GPU, or GPGPU). Its behaviour was examined by comparing it with a central processing unit (CPU) implementation, and by assessing its performance against several studied learning problems. These assessments were performed using all four cores of an Intel i7-980 CPU and an Nvidia GTX480. Compared with the CPU implementation, the GPU ESN implementation demonstrated a speed-up starting from a reservoir size of between 512 and 1,024. A maximum speed-up of approximately 6 was observed at the largest reservoir size tested (2,048). The Tikhonov regularisation (TR) implementation was also compared with a CPU implementation. Unlike the ESN execution, the GPU TR implementation was largely slower than the CPU implementation. Speed-ups were observed only at the largest reservoir and state-history sizes, the largest of which was 2.6813. The learning behaviour of the GPU ESN was tested on three problems: a sinusoid, a Mackey-Glass time series, and a multiple superimposed oscillator (MSO). The normalised root-mean-square errors of the predictors were compared. The best observed sinusoid predictor outperformed the best MSO predictor by four orders of magnitude. In turn, the best observed MSO predictor outperformed the best Mackey-Glass predictor by two orders of magnitude.
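The offline Tikhonov-regularised (ridge-regression) readout mentioned above can be written in closed form as W_out = Y Xᵀ(X Xᵀ + λI)⁻¹, where the columns of X are collected reservoir states and the columns of Y are the target outputs. The sketch below illustrates this formula only; the function name, matrix layout, and regularisation value are illustrative assumptions, not the thesis's code.

```python
import numpy as np

def train_readout(states, targets, ridge=1e-6):
    """Fit a linear readout by Tikhonov regularisation:
    W_out = Y X^T (X X^T + lambda * I)^(-1).
    `states` has one reservoir state per row; `targets` holds the
    desired output for each state. `ridge` is the regularisation
    strength lambda (an illustrative default)."""
    X = states.T                                            # (n_reservoir, T)
    Y = np.atleast_2d(targets).reshape(len(states), -1).T   # (n_out, T)
    n = X.shape[0]
    return Y @ X.T @ np.linalg.inv(X @ X.T + ridge * np.eye(n))

# Toy usage: recover a known linear map from synthetic "states".
rng = np.random.default_rng(1)
X_demo = rng.standard_normal((500, 20))   # 500 states of size 20
w_true = rng.standard_normal(20)
W_out = train_readout(X_demo, X_demo @ w_true)
```

The dominant costs here are the Gram matrix X Xᵀ and the solve, which is consistent with the abstract's observation that the GPU only pays off at the largest reservoir and state-history sizes.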

Identifier: oai:union.ndltd.org:canterbury.ac.nz/oai:ir.canterbury.ac.nz:10092/7617
Date: January 2013
Creators: Keith, Tūreiti
Publisher: University of Canterbury. Department of Electrical & Computer Engineering
Source Sets: University of Canterbury
Language: English
Detected Language: English
Type: Electronic thesis or dissertation, Text
Rights: Copyright Tūreiti Keith, http://library.canterbury.ac.nz/thesis/etheses_copyright.shtml
Relation: NZCU