
Universal approximation theory of neural networks

Historically, artificial neural networks have been loosely defined as biologically inspired computational models. When deciding what sort of network to use for a given task, two considerations arise. The first is the representational power of the network: what class of problems can it solve? Given a set of problems to be solved by neural networks, a network that can solve any of them is called a universal approximator. The second is the ability to find a desired network, starting from an initial network, via a learning rule. Here we are interested in the question of universal approximation. A general definition of artificial neural networks is provided, along with definitions for different kinds of universal approximation. We then prove that the recurrent temporal restricted Boltzmann machine (RTRBM) satisfies a general type of universal approximation for stochastic processes, extending previous results for the simple RBM. We conclude by examining the potential use of such temporal artificial neural networks in the biological process of perception.

Graduate
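To make the restricted Boltzmann machine mentioned in the abstract concrete, the following is a minimal illustrative sketch (not taken from the thesis) of an RBM with binary visible and hidden units and one step of block Gibbs sampling. The sizes, weights, and function names are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RBM: 4 binary visible units, 3 binary hidden units.
n_v, n_h = 4, 3
W = rng.normal(scale=0.1, size=(n_v, n_h))  # visible-hidden weights
b = np.zeros(n_v)                            # visible biases
c = np.zeros(n_h)                            # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_h_given_v(v):
    # P(h_j = 1 | v) = sigma(c_j + sum_i v_i W_ij)
    p = sigmoid(c + v @ W)
    return (rng.random(n_h) < p).astype(float)

def sample_v_given_h(h):
    # P(v_i = 1 | h) = sigma(b_i + sum_j W_ij h_j)
    p = sigmoid(b + W @ h)
    return (rng.random(n_v) < p).astype(float)

# One step of block Gibbs sampling from a random visible state.
v = rng.integers(0, 2, size=n_v).astype(float)
h = sample_h_given_v(v)
v_next = sample_v_given_h(h)
```

The RTRBM studied in the thesis extends this model with recurrent connections so that the hidden biases at each time step depend on the hidden state at the previous step, letting the machine model stochastic processes rather than static distributions.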

Identifier: oai:union.ndltd.org:uvic.ca/oai:dspace.library.uvic.ca:1828/7040
Date: 15 January 2016
Creators: Odense, Simon
Contributors: Edwards, Roderick
Source Sets: University of Victoria
Language: English
Detected Language: English
Type: Thesis
Format: application/pdf
Rights: Available to the World Wide Web
