
Algebraic derivation of neural networks and its applications in image processing

Artificial neural networks are systems composed of interconnected simple computing units, known as artificial neurons, which simulate some properties of their biological counterparts.
They have been developed and studied both to understand how brains function and for computational purposes.
In order to use a neural network for computation, the network has to be designed in such a way that it performs a useful function. Currently, the most popular method of designing
a network to perform a function is to adjust the parameters of a specified network until the network approximates the input-output behaviour of the function. Although some analytical knowledge about the function is sometimes available or obtainable, it is usually not used. Some neural network paradigms exist in which such knowledge is utilized; however, there is no systematic method for doing so. The objective of this research is to develop such a method.
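As a point of reference, the following is a minimal sketch (not taken from the thesis) of the conventional design-by-parameter-adjustment approach the abstract contrasts with: a fixed architecture whose weights are iteratively adjusted until the network approximates a target function's input-output behaviour. The target function, network size, and learning rate below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of design by parameter adjustment: fix an architecture, then tune
# its parameters until the network approximates the target's input-output behaviour.
# Target function, layer sizes, and learning rate are assumptions for illustration.
rng = np.random.default_rng(0)

def target(x):
    return np.sin(np.pi * x)          # assumed target function to approximate

W1 = rng.normal(size=(1, 8)); b1 = np.zeros(8)   # one hidden layer, random start
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.uniform(-1, 1, size=(64, 1))  # sample inputs
y = target(x)                         # desired outputs

lr = 0.1
for _ in range(5000):                 # iterative parameter adjustment (gradient descent)
    h = sigmoid(x @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y                   # approximation error to be reduced
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * h * (1 - h)   # backpropagate through the hidden layer
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```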
A systematic method of neural network design, which we call the algebraic derivation methodology, is proposed and developed in this thesis. It is developed with an emphasis on designing neural networks to implement image processing algorithms. A key feature of this methodology is that neurons and neural networks are represented symbolically, so that a network can be algebraically derived from a given function and the resulting network can be simplified. By simplification we mean finding an equivalent network (i.e., one performing the same function) with fewer layers and fewer neurons. A type of neural network, which we call LQT networks, is chosen for implementing image processing algorithms.
Theorems for simplifying such networks are developed. Procedures for deriving such networks to realize both single-input and multiple-input functions are given.
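To make the notion of simplification concrete, here is a minimal sketch under stated assumptions; the thesis's LQT networks and simplification theorems are not reproduced here. It only illustrates, for the special case of two cascaded layers of linear neurons, the kind of equivalence that simplification aims at: an algebraically derived single layer computing the same function with fewer layers and fewer neurons.

```python
import numpy as np

# Illustrative simplification: two cascaded layers of linear neurons collapse
# algebraically into a single equivalent layer (same function, fewer layers).
# This is NOT the thesis's LQT construction, only a special linear case.
rng = np.random.default_rng(1)

# Two-layer network: y = W2 (W1 x + b1) + b2
W1 = rng.normal(size=(4, 3)); b1 = rng.normal(size=4)
W2 = rng.normal(size=(2, 4)); b2 = rng.normal(size=2)

def two_layer(x):
    return W2 @ (W1 @ x + b1) + b2

# Algebraically derived one-layer equivalent: y = (W2 W1) x + (W2 b1 + b2)
W = W2 @ W1
b = W2 @ b1 + b2

def one_layer(x):
    return W @ x + b

x = rng.normal(size=3)
assert np.allclose(two_layer(x), one_layer(x))   # same input-output behaviour
```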
To show the merits of the algebraic derivation methodology, LQT networks for implementing
some well-known algorithms in image processing and in some other areas are developed using the above-mentioned theorems and procedures. Most of these networks are the first known neural network models of their kind; where other network models are known, our networks achieve the same or better performance in terms of computation time.

Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate

Identifier: oai:union.ndltd.org:UBC/oai:circle.library.ubc.ca:2429/31511
Date: January 1991
Creators: Shi, Pingnan
Publisher: University of British Columbia
Source Sets: University of British Columbia
Language: English
Detected Language: English
Type: Text, Thesis/Dissertation
Rights: For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use.
