81

Nonlinear behavior in small neural systems

Wheeler, Diek Winters, January 1998 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 1998. / Vita. Includes bibliographical references (leaves 147-166). Available also in a digital version from Dissertation Abstracts.
82

From synapse to behaviour: selective modulation of neuronal networks

Goetz, Thomas. January 2008 (has links)
Thesis (Ph.D.)--Aberdeen University, 2008. / Title from web page (viewed on Mar. 2, 2009). Includes bibliographical references.
83

Causal pattern inference from neural spike train data

Echtermeyer, Christoph. January 2009 (has links)
Thesis (Ph.D.) - University of St Andrews, October 2009.
84

Neural networks and shape identification: an honors project

Hansen, Andrew D. January 1900 (has links) (PDF)
Honors project (B.S.) -- Carson-Newman College, 2010. / Project advisor: Dr. Henry Suters. Includes bibliographical references (p. 40).
85

Developing neural network applications using LabVIEW

Pogula Sridhar, Sriram. January 2005 (has links)
Thesis (M.S.)--University of Missouri-Columbia, 2005. / The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file, viewed on July 14, 2006. Includes bibliographical references.
86

A Dynamic Parameter Tuning Algorithm for RBF Neural Networks

Li, Junxu January 1999 (has links) (PDF)
No description available.
87

Identification of robotic manipulators' inverse dynamics coefficients via model-based adaptive networks

Hay, Robert James January 1998 (has links)
The values of a given manipulator's dynamics coefficients need to be accurately identified in order to employ model-based algorithms in the control of its motion. This thesis details the development of a novel form of adaptive network which is capable of accurately learning the coefficients of systems, such as manipulator inverse dynamics, where the algebraic form is known but the coefficients' values are not. Empirical motion data from a pair of PUMA 560s has been processed by the Context-Sensitive Linear Combiner (CSLC) network developed, and the coefficients of their inverse dynamics identified. The resultant precision of control is shown to be superior to that achieved by employing dynamics coefficients derived from direct measurement.

As part of the development of the CSLC network, the process of network learning is examined. This analysis reveals that current network architectures for processing analogue output systems with high input order are highly unlikely to produce solutions that are good estimates throughout the entire problem space. In contrast, the CSLC network is shown to generalise intrinsically as a result of its structure, whilst its training is greatly simplified by the presence of only one minimum in the network's error hypersurface. Furthermore, a fine-tuning algorithm for network training is presented which takes advantage of the CSLC network's single adaptive layer structure and does not rely upon gradient descent of the network error hypersurface, which commonly slows the later stages of network training.
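The abstract's setting — a system whose algebraic form is known but whose coefficient values are not — is the classic linear-in-parameters identification problem. The thesis's CSLC network is not reproduced here; as a minimal illustrative sketch, ordinary least squares can recover the coefficients of a hypothetical toy model `tau = a*qdd + b*qd + c*sin(q)` from noisy motion data (all names and values below are invented for illustration):

```python
import numpy as np

# Toy linear-in-parameters model: torque tau = a*qdd + b*qd + c*sin(q).
# The algebraic form is known; the coefficients (a, b, c) are treated as unknown.
rng = np.random.default_rng(0)
true_theta = np.array([2.0, 0.5, 9.81])   # "unknown" coefficients to recover

q   = rng.uniform(-np.pi, np.pi, 200)     # joint positions
qd  = rng.uniform(-1.0, 1.0, 200)         # joint velocities
qdd = rng.uniform(-2.0, 2.0, 200)         # joint accelerations

# Regressor matrix: each column is a known basis function of the motion data.
Phi = np.column_stack([qdd, qd, np.sin(q)])
tau = Phi @ true_theta + rng.normal(0.0, 0.01, 200)  # noisy torque measurements

# Least-squares estimate of the coefficient vector.
theta_hat, *_ = np.linalg.lstsq(Phi, tau, rcond=None)
print(theta_hat)  # close to [2.0, 0.5, 9.81]
```

The thesis argues that for such problems a single-adaptive-layer structure yields a single minimum in the error surface; the least-squares formulation above shares that property, since the squared-error cost is convex in the coefficients.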
88

Optimization of salbutamol sulfate dissolution from sustained release matrix formulations using an artificial neural network

Chaibva, F A, Burton, M, Walker, Roderick January 2010 (has links)
An artificial neural network was used to optimize the release of salbutamol sulfate from hydrophilic matrix formulations. Model formulations used for training, testing and validating the neural network were manufactured with the aid of a central composite design, varying the levels of Methocel® K100M, xanthan gum, Carbopol® 974P and Surelease® as the input factors. In vitro dissolution time profiles at six different sampling times were used as target data in training the neural network for formulation optimization. A multilayer perceptron with one hidden layer was constructed using Matlab®, and the number of nodes in the hidden layer was optimized by trial and error to develop a model with the best predictive ability. The results revealed that a network with nine hidden nodes was optimal. Simulations undertaken with the training data confirmed that the constructed model was usable. The optimized neural network was then used to identify a formulation with desirable release characteristics, and the predicted formulation agreed with the manufactured formulation. This work illustrates the potential utility of artificial neural networks for the optimization of pharmaceutical formulations with desirable performance characteristics.
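The architecture described — a one-hidden-layer perceptron with nine hidden nodes mapping four formulation factors to a six-point dissolution profile — can be sketched in a few lines. This is an illustrative reimplementation on synthetic data, not the thesis's Matlab® model; the data-generating function and all hyperparameters below are assumptions:

```python
import numpy as np

# Synthetic stand-in for the formulation data: 4 input factors (excipient
# levels) -> 6 dissolution sampling times. The true mapping is invented.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (40, 4))
W_true = rng.uniform(-1.0, 1.0, (4, 6))
Y = np.tanh(X @ W_true)                    # target six-point profiles

# One hidden layer with nine nodes, as reported optimal in the abstract.
W1 = rng.normal(0.0, 0.5, (4, 9)); b1 = np.zeros(9)
W2 = rng.normal(0.0, 0.5, (9, 6)); b2 = np.zeros(6)

mse_init = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean())

lr = 0.1
for _ in range(2000):                      # plain batch gradient descent
    H = np.tanh(X @ W1 + b1)               # hidden activations
    P = H @ W2 + b2                        # predicted dissolution profile
    E = P - Y
    # Backpropagate the mean-squared-error gradient.
    gW2 = H.T @ E / len(X); gb2 = E.mean(0)
    dH = (E @ W2.T) * (1.0 - H ** 2)       # tanh derivative
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean())
print(mse_init, "->", mse)                 # training error drops substantially
```

In the thesis the hidden-layer width itself was chosen by trial and error; the same loop re-run with different widths and compared on held-out formulations would reproduce that selection procedure.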
89

Design and application of neurocomputers

Naylor, David C. J. January 1994 (has links)
This thesis aims to understand how to design high-performance, flexible and cost-effective neural computing systems and apply them to a variety of real-time applications. Systems of this type already exist to support a range of ANN models. However, many of these designs have concentrated on optimising the architecture of the neural processor and have generally neglected other important aspects. If these neural systems are to be of practical benefit to researchers and allow complex neural problems to be solved efficiently, all aspects of their design must be addressed.
90

Modular connectionist architectures and the learning of quantification skills

Bale, Tracey Ann January 1998 (has links)
Modular connectionist systems comprise autonomous, communicating modules, achieving a behaviour more complex than that of a single neural network. The component modules, possibly of different topologies, may operate under various learning algorithms. Some modular connectionist systems are constrained at the representational level, in that the connectivity of the modules is hard-wired by the modeller; others are constrained at an architectural level, in that the modeller explicitly allocates each module to a specific subtask. Our approach aims to minimise these constraints, thus reducing the bias possibly introduced by the modeller. This is achieved, in the first case, through the introduction of adaptive connection weights and, in the second, by the automatic allocation of modules to subtasks as part of the learning process.

The efficacy of a minimally constrained system, with respect to representation and architecture, is demonstrated by a simulation of numerical development amongst children. The modular connectionist system MASCOT (Modular Architecture for Subitising and Counting Over Time) is a dual-routed model simulating the quantification abilities of subitising and counting. A gating network learns to integrate the outputs of the two routes in determining the final output of the system. MASCOT simulates subitising through a numerosity detection system comprising modules with adaptive weights that self-organise over time. The effectiveness of MASCOT is demonstrated in that the distance effect and Fechner's law for numbers are seen to be consequences of this learning process.

The automatic allocation of modules to subtasks is illustrated in a simulation of learning to count. Introducing feedback into one of two competing expert networks enables a mixture-of-experts model to perform decomposition of a task into static and temporal subtasks, and to allocate appropriate expert networks to those subtasks. MASCOT successfully performs decomposition of the counting task with a two-gated mixture-of-experts model and exhibits childlike counting errors.
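The dual-route gating idea described for MASCOT — a gating network blending a fast subitising route (reliable only for small numerosities) with a slower, exact counting route — can be sketched with a fixed softmax gate. In the thesis the gate is learned; here its threshold and sharpness are hand-set assumptions, and the two "experts" are simple stand-in functions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())        # numerically stable softmax
    return e / e.sum()

def subitise(n):
    # Fast route: accurate only for small numerosities, saturates beyond ~4.
    return min(float(n), 4.0)

def count(n):
    # Serial counting route: exact for any numerosity.
    return float(n)

def quantify(n, threshold=4.0, sharpness=2.0):
    """Gate the two routes: small n favours subitising, large n counting."""
    g = softmax(sharpness * np.array([threshold - n, n - threshold]))
    return g[0] * subitise(n) + g[1] * count(n)

print(quantify(2))   # small numerosity: subitising route dominates
print(quantify(8))   # large numerosity: counting route dominates
```

A learned gate, as in MASCOT, would replace the fixed threshold with a trainable network whose softmax outputs are adapted jointly with the experts, which is the standard mixture-of-experts training scheme.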
