11

Nonlinear behavior in small neural systems /

Wheeler, Diek Winters, January 1998 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 1998. / Vita. Includes bibliographical references (leaves 147-166). Available also in a digital version from Dissertation Abstracts.
12

Neural networks and shape identification : an honors project /

Hansen, Andrew D. January 1900 (has links) (PDF)
Honors project (B.S.) -- Carson-Newman College, 2010. / Project advisor: Dr. Henry Suters. Includes bibliographical references (p. 40).
13

Developing neural network applications using LabVIEW

Pogula Sridhar, Sriram. January 2005 (has links)
Thesis (M.S.)--University of Missouri-Columbia, 2005. / The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file viewed on (July 14, 2006). Includes bibliographical references.
14

A Dynamic Parameter Tuning Algorithm For Rbf Neural Networks

Li, Junxu January 1999 (has links) (PDF)
No description available.
15

Optimization of salbutamol sulfate dissolution from sustained release matrix formulations using an artificial neural network

Chaibva, F A, Burton, M, Walker, Roderick January 2010 (has links)
An artificial neural network was used to optimize the release of salbutamol sulfate from hydrophilic matrix formulations. Model formulations for training, testing and validating the neural network were manufactured with the aid of a central composite design, varying the levels of Methocel® K100M, xanthan gum, Carbopol® 974P and Surelease® as the input factors. In vitro dissolution profiles at six sampling times were used as target data in training the neural network for formulation optimization. A multi-layer perceptron with one hidden layer was constructed using Matlab®, and the number of nodes in the hidden layer was optimized by trial and error to develop the model with the best predictive ability. The results revealed that a neural network with nine hidden nodes was optimal. Simulations undertaken with the training data confirmed that the constructed model was usable. The optimized neural network was then used to design a formulation with desirable release characteristics, and the results indicated agreement between the predicted and the manufactured formulation. This work illustrates the potential utility of artificial neural networks for the optimization of pharmaceutical formulations with desirable performance characteristics.
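The modelling approach in the abstract above can be sketched in a few lines. The following is a minimal illustration, not the author's Matlab® code: a one-hidden-layer perceptron with nine hidden nodes (as the abstract reports), trained by plain full-batch gradient descent. The data here are invented stand-ins — four input factors (excipient levels) mapped to six dissolution time points — and all hyperparameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the central-composite-design data:
# 4 input factors (excipient levels) -> 6 dissolution sampling times.
X = rng.uniform(0.0, 1.0, size=(30, 4))
true_map = rng.uniform(0.2, 1.0, size=(4, 6))
Y = X @ true_map  # idealised noise-free target dissolution profiles

# One hidden layer with nine nodes, as the abstract reports.
n_hidden = 9
W1 = rng.normal(0.0, 0.5, size=(4, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, size=(n_hidden, 6))
b2 = np.zeros(6)

lr = 0.05
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)          # hidden-layer activations
    P = H @ W2 + b2                   # linear output layer
    E = P - Y                         # prediction error
    # Backpropagate mean-squared-error gradients.
    gW2 = H.T @ E / len(X)
    gb2 = E.mean(axis=0)
    dH = (E @ W2.T) * (1.0 - H**2)    # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

H = np.tanh(X @ W1 + b1)
mse = float(((H @ W2 + b2 - Y) ** 2).mean())
```

In the thesis the node count was tuned by trial and error against held-out data; here the architecture is simply fixed at nine nodes to mirror the reported optimum.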
16

Recurrent neural networks in the chemical process industries

Lourens, Cecil Albert 04 September 2012 (has links)
M.Ing. / This dissertation discusses the results of a literature survey into the theoretical aspects and development of recurrent neural networks. In particular, the various architectures and training algorithms developed for recurrent networks are discussed. The characteristics important for the efficient implementation of recurrent neural networks to model nonlinear dynamical processes have also been investigated and are discussed. Process control has been identified as a field of application where recurrent networks may play an important role. The model-based adaptive control strategy is briefly introduced, and the application of recurrent networks to both the direct and the indirect adaptive control strategies is highlighted. In conclusion, the important areas of future research for the successful implementation of recurrent networks in adaptive nonlinear control are identified.
17

Formation of the complex neural networks under multiple constraints

Chen, Yuhan 01 January 2013 (has links)
No description available.
18

Analysis of electrocardiograms using artificial neural networks

Hedén, Bo. January 1997 (has links)
Thesis (doctoral)--Lund University, 1997. / Added t.p. with thesis statement inserted.
20

Empirical analysis of neural networks training optimisation

Kayembe, Mutamba Tonton January 2016 (has links)
A Dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Mathematical Statistics, School of Statistics and Actuarial Science. October 2016. / Neural networks (NNs) may be characterised by complex error functions with attributes such as saddle points, local minima, flat spots and plateaus. This complicates the associated training process in terms of efficiency, convergence and accuracy, given that training is done by minimising such complex error functions. This study empirically investigates the performance of two NN training algorithms based on unconstrained and global optimisation theories, i.e. Resilient propagation (Rprop) and the Conjugate Gradient method with Polak-Ribière updates (CGP). It also shows how the network structure plays a role in the training optimisation of NNs. In this regard, various training scenarios are used to classify two protein datasets, the Escherichia coli and Yeast data. These training scenarios use varying numbers of hidden nodes and training iterations. The results show that Rprop outperforms CGP. Moreover, the performance of the classifiers varies under the various training scenarios.
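As a concrete illustration of one of the two algorithms named in the abstract above: Rprop adapts an individual step size per parameter from the sign of successive partial derivatives, ignoring the gradient magnitude entirely. The sketch below is the generic Rprop- update applied to a toy quadratic, not the study's actual experimental setup; the constants (1.2, 0.5, the step bounds) are the commonly cited defaults, and the toy objective is an assumption for illustration.

```python
import numpy as np

def rprop_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One Rprop- update: grow each per-parameter step while its gradient
    sign is stable, shrink it when the sign flips, then move each
    parameter opposite its gradient sign by its own step size."""
    agree = grad * prev_grad
    step = np.where(agree > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(agree < 0, np.maximum(step * eta_minus, step_min), step)
    return -np.sign(grad) * step, step

# Toy problem: minimise f(w) = ||w||^2, whose gradient is 2w.
w = np.array([3.0, -2.0])
step = np.full_like(w, 0.1)       # initial per-parameter step sizes
prev_grad = np.zeros_like(w)
for _ in range(200):
    grad = 2.0 * w
    delta, step = rprop_step(grad, prev_grad, step)
    w += delta
    prev_grad = grad
```

Because only the gradient's sign is used, Rprop is insensitive to the poorly scaled, plateau-ridden error surfaces the abstract describes — a step across a flat spot is as large as the adapted step size allows, regardless of how small the gradient there is.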
