421.
NRAGE regulates life and death of neural progenitors /Kendall, Stephen E., January 2004 (has links) (PDF)
Thesis (Ph.D.) in Biochemistry and Molecular Biology--University of Maine, 2004. / Includes vita. Includes bibliographical references (leaves 133-183).
422.
Dynamics of dressed neurons : modeling the neural-glial circuit and exploring its normal and pathological implications /Nadkarni, Suhita. January 2005 (has links)
Thesis (Ph.D.)--Ohio University, June, 2005. / Title from PDF t.p. Includes bibliographical references (p. 130-137)
423.
Integration of statistical and neural network method for data analysis /Chavali, Krishna Kumar. January 2006 (has links)
Thesis (M.S.)--West Virginia University, 2006. / Title from document title page. Document formatted into pages; contains viii, 68 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 50-51).
424.
A three-dimensional coupled microelectrode and microfluidic array for neuronal interfacing /Choi, Yoonsu. January 2005 (has links)
Thesis (Ph. D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2006. / Michaels, Thomas E., Committee Member ; LaPlaca, Michelle, Committee Member ; Frazier, A. Bruno, Committee Member ; DeWeerth, Stephen P., Committee Member ; Allen, Mark G., Committee Chair.
425.
NRAGE Regulates Life and Death of Neural Progenitors /Kendall, Stephen E. January 2004 (has links) (PDF)
No description available.
426.
The stimulus router system : a novel neural prosthesis /Gan, Liu Shi. January 2009 (has links)
Thesis (Ph.D.)--University of Alberta, 2009. / A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Medical Sciences - Biomedical Engineering. Title from pdf file main screen (viewed on November 14, 2009). Includes bibliographical references.
427.
Neural networks applications in estimating construction costs /Rouhana, Khalil G., January 1994 (has links)
Thesis (M.S.)--Virginia Polytechnic Institute and State University, 1994. / Vita. Abstract. Includes bibliographical references (leaves 155-159). Also available via the Internet.
428.
Comparison of training methods for artificial neural networks /Σταθοπούλου, Δήμητρα 16 June 2011 (has links)
The purpose of this thesis is to study the training of Artificial Neural Networks using well-known methods, namely standard back-propagation, adaptive back-propagation, momentum back-propagation and resilient propagation (RPROP), and to compare them.
In this work we presented the basic architectures of Artificial Neural Networks and the various methods for training them. We studied techniques for optimising the performance of a network through such algorithms, and described the well-known back-propagation method as well as its variants.
Finally, we reported the experimental results from training Artificial Neural Networks with the above algorithms on well-known and widely used benchmark problems drawn from the real world.
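The training rules compared in this abstract differ mainly in how each weight update is derived from the error gradient. A minimal sketch of two of them, momentum back-propagation and a simplified RPROP variant that omits weight backtracking, is given below; the function names and default parameter values are illustrative assumptions, not code from the thesis.

```python
import numpy as np

def momentum_step(grad, prev_step, lr=0.01, mu=0.9):
    """Momentum back-propagation: blend the current gradient step
    with a fraction of the previous weight change."""
    return mu * prev_step - lr * grad

def rprop_step(grad, prev_grad, delta, eta_plus=1.2, eta_minus=0.5,
               delta_min=1e-6, delta_max=50.0):
    """Simplified RPROP: adapt a per-weight step size using only the
    sign of the gradient, ignoring its magnitude."""
    s = np.sign(grad) * np.sign(prev_grad)   # did the gradient change sign?
    delta = np.where(s > 0, np.minimum(delta * eta_plus, delta_max), delta)
    delta = np.where(s < 0, np.maximum(delta * eta_minus, delta_min), delta)
    return -np.sign(grad) * delta, delta
```

Both functions return the weight change to add to the current weights; RPROP additionally returns the adapted per-weight step sizes, which are carried over to the next iteration.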
429.
Robustness and generalisation : tangent hyperplanes and classification trees /Fernandes, Antonio Ramires January 1997 (has links)
The issue of robust training is tackled for fixed multilayer feedforward architectures. Several researchers have proved the theoretical capabilities of Multilayer Feedforward networks, but in practice the convergence of standard methods such as backpropagation, conjugate gradient descent and quasi-Newton methods may be poor for various problems. It is suggested that the common assumptions about the overall surface shape break down when many individual component surfaces are combined, and robustness suffers accordingly. A new method to train Multilayer Feedforward networks is presented in which no particular shape is assumed for the surface and an attempt is made to combine the individual components of a solution optimally into the overall solution. The method is based on computing tangent hyperplanes to the non-linear solution manifolds. At the core of the method is a mechanism to minimise the sum of squared errors, so its use is not limited to Neural Networks. The set of tests performed for Neural Networks shows that the method is very robust with regard to convergence of training and has a powerful ability to find good directions in weight space.
Generalisation is also a very important issue in Neural Networks and elsewhere: Neural Networks are expected to provide sensible outputs for unseen inputs. A framework for hyperplane-based classifiers is presented for improving average generalisation. The framework attempts to establish a trained boundary with an optimal overall spacing from the boundary to the training points closest to it. The framework is shown to provide results consistent with the theoretical expectations.
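The abstract does not give the algorithm itself, but the idea of tangent hyperplanes to per-pattern solution manifolds can be illustrated with a Kaczmarz-style sketch: linearise one pattern's output around the current weights and project the weights onto the hyperplane where the linearised output equals the target. Everything below (the function names, the toy sigmoid model, the iteration scheme) is an illustrative assumption, not the thesis's method as published.

```python
import numpy as np

def tangent_projection_step(w, x, t, f, grad_f):
    """Project weights w onto the tangent hyperplane of the solution
    manifold {w' : f(w', x) = t} for one training pattern.

    Linearising at w gives f(w', x) ~ f(w, x) + g.(w' - w), so the
    orthogonal projection moves w along g until the linearised
    output matches the target t."""
    e = t - f(w, x)               # residual for this pattern
    g = grad_f(w, x)              # gradient of the output w.r.t. w
    return w + e * g / (g @ g)    # orthogonal projection onto the hyperplane

# Toy example: a single sigmoid unit (illustrative only).
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
f = lambda w, x: sigmoid(w @ x)
grad_f = lambda w, x: f(w, x) * (1.0 - f(w, x)) * x

w = np.zeros(3)
x, t = np.array([1.0, 0.5, -0.3]), 0.8
for _ in range(20):
    w = tangent_projection_step(w, x, t, f, grad_f)
```

With several patterns, the interesting question, and the one the abstract points at, is how the individual per-pattern projections are combined into one overall weight update.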
430.
Using constraints to improve generalisation and training of feedforward neural networks : constraint based decomposition and complex backpropagation /Draghici, Sorin January 1996 (has links)
Neural networks can be analysed from two points of view: training and generalisation. Training is characterised by a trade-off between the 'goodness' of the training algorithm itself (speed, reliability, guaranteed convergence) and the 'goodness' of the architecture (the difficulty of the problems the network can potentially solve). Good training algorithms are available for simple architectures, which cannot solve complicated problems; more complex architectures, which have been shown to be able to solve potentially any problem, do not in general have simple and fast algorithms with guaranteed convergence and high reliability. A good training technique should be simple, fast and reliable, and yet also be applicable to produce a network able to solve complicated problems. The thesis presents Constraint Based Decomposition (CBD) as a technique which satisfies these requirements well. CBD is shown to build a network able to solve complicated problems in a simple, fast and reliable manner. Furthermore, the user is given better control over the generalisation properties of the trained network than is offered by other techniques.
The generalisation issue is addressed as well. An analysis of the meaning of the term "good generalisation" is presented, and a framework for assessing generalisation is given: generalisation can be assessed only with respect to a known or desired underlying function. The known properties of the underlying function can be embedded into the network, thus ensuring better generalisation for the given problem. This is the fundamental idea of the complex backpropagation network, which can associate signals by associating some of their parameters using complex weights. It is shown that such a network can yield better generalisation results than a standard backpropagation network associating instantaneous values.
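The closing idea, associating signals through some of their parameters using complex weights, can be illustrated by how a single complex weight acts on a phasor: its magnitude scales the signal's amplitude while its argument shifts the phase, so one weight carries an associated pair of parameters. The sketch below is only an illustration of that reading; nothing in it is taken from the thesis itself.

```python
import numpy as np

# One complex weight: its magnitude scales the amplitude of a signal,
# its argument shifts the phase, so a single weight associates a
# (gain, phase-shift) pair of parameters.
w = 0.5 * np.exp(1j * np.pi / 4)   # gain 0.5, phase shift of 45 degrees

def apply_complex_weight(amplitude, phase, w):
    """Encode the signal as a phasor, multiply by the complex weight,
    and read back the transformed (amplitude, phase) pair."""
    z = amplitude * np.exp(1j * phase) * w
    return np.abs(z), np.angle(z)

amp_out, phase_out = apply_complex_weight(1.0, 0.0, w)
# amp_out -> 0.5, phase_out -> pi/4: one weight transforms both parameters.
```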