About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Nonlinear behavior in small neural systems

Wheeler, Diek Winters, January 1998
Thesis (Ph.D.), University of Texas at Austin, 1998. Vita. Includes bibliographical references (leaves 147-166). Also available in a digital version from Dissertation Abstracts.
12

Neural networks and shape identification: an honors project

Hansen, Andrew D., 2010
Honors project (B.S.), Carson-Newman College, 2010. Project advisor: Dr. Henry Suters. Includes bibliographical references (p. 40).
13

Developing neural network applications using LabVIEW

Pogula Sridhar, Sriram, January 2005
Thesis (M.S.), University of Missouri-Columbia, 2005. The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which is also part of research.pdf); a non-technical public abstract appears in the public.pdf file. Title from title screen of research.pdf file, viewed July 14, 2006. Includes bibliographical references.
14

A Dynamic Parameter Tuning Algorithm for RBF Neural Networks

Li, Junxu, January 1999
No description available.
15

Optimization of salbutamol sulfate dissolution from sustained release matrix formulations using an artificial neural network

Chaibva, F. A., Burton, M., Walker, Roderick, January 2010
An artificial neural network was used to optimize the release of salbutamol sulfate from hydrophilic matrix formulations. Model formulations for training, testing and validating the network were manufactured according to a central composite design, with the levels of Methocel® K100M, xanthan gum, Carbopol® 974P and Surelease® as the input factors. In vitro dissolution profiles at six sampling times served as the target data for training. A multilayer perceptron with one hidden layer was constructed in Matlab®, and the number of hidden nodes was optimized by trial and error to obtain the model with the best predictive ability. A network with nine hidden nodes proved optimal. Simulations with the training data confirmed that the constructed model was usable. The optimized network was then used to design a formulation with the desired release characteristics, and the predicted and manufactured formulations were in agreement. This work illustrates the potential utility of artificial neural networks for optimizing pharmaceutical formulations with desirable performance characteristics.
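As a rough illustration of the search described above, the sketch below uses scikit-learn's MLPRegressor standing in for the Matlab® toolbox, with synthetic stand-in data; the factor levels, dissolution responses, and range of node counts are illustrative, not values from the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 4 input factors (excipient levels) mapped to
# 6 dissolution responses (fraction released at six sampling times).
X = rng.uniform(0.0, 1.0, size=(30, 4))
y = np.clip(X @ rng.uniform(0.1, 0.4, size=(4, 6))
            + 0.05 * rng.standard_normal((30, 6)), 0.0, 1.0)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Trial-and-error search over hidden-layer sizes, as in the abstract.
best_score, best_net = -np.inf, None
for n_hidden in range(2, 13):
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=5000,
                       random_state=0).fit(X_train, y_train)
    score = net.score(X_test, y_test)  # R^2 on held-out formulations
    if score > best_score:
        best_score, best_net = score, net

print(best_net.hidden_layer_sizes, best_score)
```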
16

Recurrent neural networks in the chemical process industries

Lourens, Cecil Albert, 04 September 2012
M.Ing. This dissertation discusses the results of a literature survey of the theoretical aspects and development of recurrent neural networks, in particular the various architectures and training algorithms developed for them. The characteristics important for the efficient implementation of recurrent neural networks in modelling dynamical nonlinear processes are also investigated and discussed. Process control is identified as a field of application where recurrent networks may play an important role. The model-based adaptive control strategy is briefly introduced, and the application of recurrent networks to both the direct and the indirect adaptive control strategies is highlighted. In conclusion, the important areas of future research for the successful implementation of recurrent networks in adaptive nonlinear control are identified.
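The core application named here, a recurrent network serving as a one-step-ahead model of a dynamical nonlinear process, can be sketched minimally as below. The plant, the Elman-style cell, and all dimensions are hypothetical, and training (e.g. backpropagation through time) is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical nonlinear process: y[k+1] = f(y[k], u[k]) with input u.
def process(y, u):
    return 0.6 * y + 0.2 * np.tanh(y) + 0.5 * u

# Elman-style recurrent cell used as a one-step-ahead process model;
# the hidden state carries the process memory from step to step.
n_h = 8
W_in = rng.normal(0, 0.5, (n_h, 2))    # weights on [y[k], u[k]]
W_rec = rng.normal(0, 0.3, (n_h, n_h)) # recurrent weights
w_out = rng.normal(0, 0.5, n_h)

def model_step(h, y, u):
    h = np.tanh(W_in @ np.array([y, u]) + W_rec @ h)
    return h, w_out @ h                # (new state, predicted y[k+1])

# Run the (untrained) model alongside the plant.
h, y = np.zeros(n_h), 0.0
for k in range(5):
    u = np.sin(0.3 * k)
    h, y_pred = model_step(h, y, u)
    y = process(y, u)
    print(k, round(y, 3), round(y_pred, 3))
```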
17

Formation of the complex neural networks under multiple constraints

Chen, Yuhan, 01 January 2013
No description available.
18

Empirical analysis of neural networks training optimisation

Kayembe, Mutamba Tonton, January 2016
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Mathematical Statistics, School of Statistics and Actuarial Science, October 2016. Neural networks (NNs) may be characterised by complex error functions with attributes such as saddle points, local minima, even-spots and plateaus. This complicates the associated training process in terms of efficiency, convergence and accuracy, given that training is done by minimising such complex error functions. This study empirically investigates the performance of two NN training algorithms based on unconstrained and global optimisation theories: resilient propagation (Rprop) and conjugate gradient with Polak-Ribière updates (CGP). It also shows how the network structure plays a role in the training optimisation of NNs. Various training scenarios, with varying numbers of hidden nodes and training iterations, are used to classify two protein datasets, the Escherichia coli and Yeast data. The results show that Rprop outperforms CGP, and that classifier performance varies across the training scenarios.
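For reference, a minimal numpy sketch of the Rprop idea, here the iRprop- variant with its usual default parameters (not necessarily the exact variant or settings used in the study): per-weight step sizes grow while the gradient keeps its sign and shrink when it flips, so only the sign of the gradient, not its magnitude, drives each update.

```python
import numpy as np

def irprop_minus_step(w, grad, prev_grad, step,
                      eta_plus=1.2, eta_minus=0.5,
                      step_min=1e-6, step_max=50.0):
    """One iRprop- update on a weight vector.

    w         : current weights
    grad      : gradient of the error at w
    prev_grad : gradient from the previous iteration
    step      : per-weight step sizes, adapted in place of a learning rate
    """
    sign_change = grad * prev_grad
    # Same sign: accelerate. Sign flip: brake and skip this update (iRprop-).
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)
    w = w - np.sign(grad) * step
    return w, grad, step

# Usage on a toy quadratic error E(w) = 0.5 * ||w||^2 (gradient = w):
w = np.array([3.0, -2.0])
prev_grad, step = np.zeros_like(w), np.full_like(w, 0.1)
for _ in range(50):
    w, prev_grad, step = irprop_minus_step(w, w.copy(), prev_grad, step)
print(w)  # close to the minimum at the origin
```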
19

Initialising neural networks with prior knowledge

Rountree, Nathan, January 2007
This thesis explores the relationship between two classification models: decision trees and multilayer perceptrons. Decision trees carve up databases into box-shaped regions, and make predictions based on the majority class in each box. They are quick to build and relatively easy to interpret. Multilayer perceptrons (MLPs) are often more accurate than decision trees, because they are able to use soft, curved, arbitrarily oriented decision boundaries. Unfortunately MLPs typically require a great deal of effort to determine a good number and arrangement of neural units, and then require many passes through the database to determine a good set of connection weights. The cost of creating and training an MLP is thus hundreds of times greater than the cost of creating a decision tree, for perhaps only a small gain in accuracy.

The following scheme is proposed for reducing the computational cost of creating and training MLPs. First, build and prune a decision tree to generate prior knowledge of the database. Then, use that knowledge to determine the initial architecture and connection weights of an MLP. Finally, use a training algorithm to refine the knowledge now embedded in the MLP. This scheme has two potential advantages: a suitable neural network architecture is determined very quickly, and training should require far fewer passes through the data.

In this thesis, new algorithms for initialising MLPs from decision trees are developed. The algorithms require just one traversal of a decision tree, and produce four-layer MLPs with the same number of hidden units as there are nodes in the tree. The resulting MLPs can be shown to reach a state more accurate than the decision trees that initialised them, in fewer training epochs than a standard MLP. Employing this approach typically results in MLPs that are just as accurate as standard MLPs, and an order of magnitude cheaper to train.
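One common construction for the tree-to-network mapping, shown below as a sketch of the general technique rather than the author's exact algorithm: each internal split of a fitted tree becomes one steep sigmoid unit in the first hidden layer, with the split threshold encoded in the bias.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

def split_layer_from_tree(clf, beta=5.0):
    """Map each internal node (feature f, threshold t) of a fitted tree to a
    sigmoid unit computing sigmoid(beta * (x[f] - t)), a soft version of the
    hard split x[f] > t. Returns the first hidden layer's weights and biases;
    later layers would combine these soft splits into leaf-region indicators.
    """
    t = clf.tree_
    internal = np.where(t.feature >= 0)[0]  # leaf nodes have feature == -2
    W = np.zeros((len(internal), clf.n_features_in_))
    b = np.zeros(len(internal))
    for i, node in enumerate(internal):
        W[i, t.feature[node]] = beta        # beta controls split softness
        b[i] = -beta * t.threshold[node]
    return W, b

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
W, b = split_layer_from_tree(tree)
hidden = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))  # soft split activations
print(W.shape, hidden.shape)
```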
20

Neural network ensonification emulation: training and application

Jung, Jae-Byung, January 2001
Thesis (Ph.D.), University of Washington, 2001. Vita. Includes bibliographical references (leaves 59-63).
