1 |
Dynamic construction of back-propagation artificial neural networks. January 1991
by Korris Fu-lai Chung.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1991.
Bibliography: leaves R-1 - R-5.
Contents:
LIST OF FIGURES --- p.vi
LIST OF TABLES --- p.viii
Chapter 1  INTRODUCTION
  1.1  Recent Resurgence of Artificial Neural Networks --- p.1-1
  1.2  A Design Problem in Applying Back-Propagation Networks --- p.1-4
  1.3  Related Works --- p.1-6
  1.4  Objective of the Research --- p.1-8
  1.5  Thesis Organization --- p.1-9
Chapter 2  MULTILAYER FEEDFORWARD NETWORKS (MFNs) AND BACK-PROPAGATION (BP) LEARNING ALGORITHM
  2.1  Introduction --- p.2-1
  2.2  From Perceptrons to MFNs --- p.2-2
  2.3  From Delta Rule to BP Algorithm --- p.2-6
  2.4  A Variant of BP Algorithm --- p.2-12
Chapter 3  INTERPRETATIONS AND PROPERTIES OF BP NETWORKS
  3.1  Introduction --- p.3-1
  3.2  A Pattern Classification View on BP Networks --- p.3-2
    3.2.1  Pattern Space Interpretation of BP Networks --- p.3-2
    3.2.2  Weight Space Interpretation of BP Networks --- p.3-3
  3.3  Local Minimum --- p.3-5
  3.4  Generalization --- p.3-6
Chapter 4  GROWTH OF BP NETWORKS
  4.1  Introduction --- p.4-1
  4.2  Problem Formulation --- p.4-1
  4.3  Learning an Additional Pattern --- p.4-2
  4.4  A Progressive Training Algorithm --- p.4-4
  4.5  Experimental Results and Performance Analysis --- p.4-7
  4.6  Concluding Remarks --- p.4-16
Chapter 5  PRUNING OF BP NETWORKS
  5.1  Introduction --- p.5-1
  5.2  Characteristics of Hidden Nodes in Oversized Networks --- p.5-2
    5.2.1  Observations from an Empirical Study --- p.5-2
    5.2.2  Four Categories of Excessive Nodes --- p.5-3
    5.2.3  Why are they excessive? --- p.5-6
  5.3  Pruning of Excessive Nodes --- p.5-9
  5.4  Experimental Results and Performance Analysis --- p.5-13
  5.5  Concluding Remarks --- p.5-19
Chapter 6  DYNAMIC CONSTRUCTION OF BP NETWORKS
  6.1  A Hybrid Approach --- p.6-1
  6.2  Experimental Results and Performance Analysis --- p.6-2
  6.3  Concluding Remarks --- p.6-7
Chapter 7  CONCLUSIONS --- p.7-1
  7.1  Contributions --- p.7-1
  7.2  Limitations and Suggestions for Further Research --- p.7-2
REFERENCES --- p.R-1
APPENDIX
  A.1  A Handwriting Numeral Recognition Experiment: Feature Extraction Technique and Sampling Process --- p.A-1
  A.2  Determining the distance d = δ²/2r in Lemma 1 --- p.A-2
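Chapter 2 of this record traces the path from the delta rule to the BP algorithm, which in practice reduces to a short gradient-descent loop. The following is a minimal sketch in NumPy, assuming a one-hidden-layer sigmoid network on XOR-style data; the architecture, learning rate, and data are illustrative assumptions, not the thesis's own experiment.

```python
# Minimal back-propagation sketch (delta rule generalised to one hidden
# layer). Shapes, learning rate, and XOR targets are assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 3))           # input -> hidden weights
b1 = np.zeros(3)
W2 = rng.normal(scale=0.5, size=(3, 1))           # hidden -> output weights
b2 = np.zeros(1)
eta = 0.5                                         # learning rate

for _ in range(10000):
    H = sigmoid(X @ W1 + b1)                      # hidden activations
    Y = sigmoid(H @ W2 + b2)                      # network outputs
    d2 = (Y - T) * Y * (1 - Y)                    # output-layer delta
    d1 = (d2 @ W2.T) * H * (1 - H)                # error propagated back
    W2 -= eta * H.T @ d2                          # gradient-descent updates
    b2 -= eta * d2.sum(axis=0)
    W1 -= eta * X.T @ d1
    b1 -= eta * d1.sum(axis=0)
```

Note that such a loop can still stall in a poor solution, which is exactly the local-minimum issue the thesis takes up in Section 3.3.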
|
2 |
Constructive neural networks : generalisation, convergence and architectures / Treadgold, Nicholas K., Computer Science & Engineering, Faculty of Engineering, UNSW. January 1999
Feedforward neural networks trained via supervised learning have proven to be successful in the field of pattern recognition. The most important feature of a pattern recognition technique is its ability to successfully classify future data. This is known as generalisation. A more practical aspect of pattern recognition methods is how quickly they can be trained and how reliably a good solution is found. Feedforward neural networks have been shown to provide good generalisation on a variety of problems, and a number of training techniques exist that provide fast convergence. Two problems often addressed within the field are therefore how to improve the generalisation and the convergence of these pattern recognition techniques. This thesis addresses both problems through the framework of constructive neural network algorithms: feedforward neural networks in which the network architecture is built during the training process. The type of architecture built can affect both generalisation and convergence speed.

Convergence speed and reliability are important properties of feedforward neural networks. These properties are studied by examining different training algorithms and the effect of using a constructive process. A new gradient based training algorithm, SARPROP, is introduced. This algorithm addresses the problems of poor convergence speed and reliability when using a gradient based training method. SARPROP is shown to increase both convergence speed and the chance of convergence to a good solution, which is achieved through the combination of gradient based and Simulated Annealing methods.

The convergence properties of various constructive algorithms are examined through a series of empirical studies. The results of these studies demonstrate that the cascade architecture allows for faster, more reliable convergence using a gradient based method than a single layer architecture with a comparable number of weights. It is shown that constructive algorithms that bias the search direction of the gradient based training algorithm for the newly added hidden neurons produce smaller networks and more rapid convergence. A constructive algorithm using search direction biasing is shown to converge to solutions using networks that are unreliable and inefficient to train with a non-constructive gradient based algorithm. The technique of weight freezing is shown to result in larger architectures than those obtained from training the whole network.

Improving the generalisation ability of constructive neural networks is an important area of investigation. A series of empirical studies are performed to examine the effect of regularisation on generalisation in constructive cascade algorithms. It is found that the combination of early stopping and regularisation results in better generalisation than the use of early stopping alone. A cubic regularisation term that greatly penalises large weights is shown to be beneficial for generalisation in cascade networks. An adaptive method of setting the regularisation magnitude in constructive networks is introduced and is shown to produce generalisation results similar to those obtained with a fixed, user-optimised regularisation setting. This adaptive method also often results in the construction of smaller networks for more complex problems.
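The abstract describes SARPROP only at the level of combining gradient based and Simulated Annealing methods. As a hedged illustration of that combination, the sketch below applies an RPROP-style sign-based step perturbed by noise that decays under an annealing temperature schedule; the step bounds, decay constant, and noise placement here are assumptions, and the published SARPROP update differs in detail.

```python
# Sketch of a SARPROP-like update: sign-based adaptive steps (RPROP idea)
# plus annealed noise to help escape poor minima. All constants are
# illustrative assumptions, not the algorithm's published values.
import numpy as np

def sarprop_like_step(w, grad, prev_grad, step, epoch,
                      eta_plus=1.2, eta_minus=0.5,
                      step_min=1e-6, step_max=50.0, t0=0.01, k=0.01):
    temp = t0 * np.exp(-k * epoch)           # annealing temperature
    same_sign = grad * prev_grad
    # Grow the step where the gradient sign is stable, shrink where it flips.
    step = np.where(same_sign > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(same_sign < 0, np.maximum(step * eta_minus, step_min), step)
    # Noise injection, strongest early in training, applied at sign flips.
    noise = temp * np.random.standard_normal(w.shape)
    w_new = w - np.sign(grad) * step + np.where(same_sign < 0, noise, 0.0)
    return w_new, step
```

In a full trainer this step would replace a plain gradient-descent update once per epoch, with `prev_grad` carried over between calls.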
The insights obtained from the SARPROP algorithm and from the convergence and generalisation empirical studies are used to create a new constructive cascade algorithm, acasper. This algorithm is extensively benchmarked and is shown to obtain good generalisation results in comparison to a number of well-respected and successful neural network algorithms. A technique of incorporating the validation data into the training set after network construction is introduced and is shown generally to result in similar or improved generalisation. The difficulties of implementing a cascade architecture in VLSI are described, and results are given on the effect of the cascade architecture on such attributes as weight growth, fan-in, network depth, and propagation delay. Two variants of the cascade architecture are proposed. These new architectures are shown to produce generalisation results similar to those of the cascade architecture, while also addressing the problems of VLSI implementation of cascade networks.
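The cascade architecture referred to throughout, and whose fan-in and depth costs motivate the VLSI discussion, can be made concrete with a small sketch: each newly installed hidden neuron connects to the inputs and to every previously installed hidden neuron, so fan-in and depth grow with every addition. The names below are illustrative assumptions, and the training of the new unit's weights (the part acasper addresses) is deliberately elided.

```python
# Hedged sketch of cascade wiring: each hidden unit's fan-in covers the
# inputs plus all earlier hidden units, so depth grows with each addition.
import numpy as np

def cascade_forward(x, hidden_weights, output_weights):
    """Forward pass through a cascade network for one input vector x."""
    acts = list(x)                        # source pool: inputs, then units
    for w in hidden_weights:              # one weight vector per hidden unit
        z = np.dot(w, acts)               # connects to inputs + earlier units
        acts.append(np.tanh(z))           # install this unit's activation
    return np.dot(output_weights, acts)   # linear output over the full pool

def add_hidden_unit(hidden_weights, n_inputs, rng):
    """Grow the network: the new unit sees all earlier activations."""
    fan_in = n_inputs + len(hidden_weights)
    hidden_weights.append(rng.normal(scale=0.1, size=fan_in))
```

The growing fan-in visible in `add_hidden_unit` is precisely what makes a VLSI layout of deep cascades awkward, and what the two proposed architecture variants aim to bound.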
|
3 |
The design and application of multi-layer neural networks / Hoskins, Bradley Graham. January 1995
Thesis (MEng in Electronic Engineering)--University of South Australia, 1995
|
4 |
Constructive neural networks : generalisation, convergence and architectures / Treadgold, Nicholas K. January 1999
Thesis (Ph. D.)--University of New South Wales, 1999. / Also available online.
|
5 |
Micro-net : the parallel path artificial neuron / Murray, Andrew Gerard William. January 2007
Thesis (Ph.D.)--Swinburne University of Technology, Faculty of Information & Communication Technologies, 2006. / A dissertation presented for the fulfilment of the requirements for the award of Doctor of Philosophy, Faculty of Information and Communication Technology, Swinburne University of Technology, 2007. Typescript. Includes bibliography.
|
6 |
A simple artificial neural network development system for study and research / Southworth, David. January 1991
Report (M.S.)--Virginia Polytechnic Institute and State University, 1991. / Vita. Abstract. Includes bibliographical references (leaves 113-114). Also available via the Internet.
|
7 |
Temporal EKG signal classification using neural networks / Mohr, Sheila Jean. January 1991
Project report (M. Eng.)--Virginia Polytechnic Institute and State University, 1991. / Abstract. Includes bibliographical references (leaf 79). Also available via the Internet.
|
8 |
Training and optimization of product unit neural networks / Ismail, Adiel. January 2001
Thesis (M.Sc.) (Computer Science)--University of Pretoria, 2001. / Summaries in Afrikaans and English.
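For context on this record's subject: a product unit replaces the usual weighted sum with a product of inputs raised to learned exponents, prod_i x_i^(w_i), commonly evaluated as exp(Σ w_i ln x_i). The sketch below assumes strictly positive inputs so the logarithm stays real; it does not reproduce the thesis's actual training or optimization scheme.

```python
# Hedged sketch of a single product unit: output = prod_i x_i ** w_i,
# evaluated in log space. Positive inputs are an assumption made here.
import numpy as np

def product_unit(x, w):
    """Product unit output for positive inputs x and exponent weights w."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    return float(np.exp(np.sum(w * np.log(x))))

# Example: with w = [2, 1] the unit computes x1**2 * x2.
print(product_unit([3.0, 4.0], [2.0, 1.0]))  # 36.0
```

Because the exponents appear inside a highly non-convex error surface, gradient descent handles such units poorly, which is why training and optimization of product unit networks is a thesis topic in its own right.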
|