  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Novel synaptic mechanisms of the cerebellum

Batchelor, Andrew Mollison January 1993 (has links)
No description available.
92

Automatic pattern recognition and learning for information systems

Brückner, Jörg January 1995 (has links)
No description available.
93

Dynamic construction of back-propagation artificial neural networks.

January 1991 (has links)
by Korris Fu-lai Chung. Thesis (M.Phil.)--Chinese University of Hong Kong, 1991. Bibliography: leaves R-1 to R-5. Contents:
1. Introduction: recent resurgence of artificial neural networks; a design problem in applying back-propagation networks; related work; objective of the research; thesis organization.
2. Multilayer feedforward networks (MFNs) and the back-propagation (BP) learning algorithm: from perceptrons to MFNs; from the delta rule to the BP algorithm; a variant of the BP algorithm.
3. Interpretations and properties of BP networks: pattern-space and weight-space interpretations of BP networks; local minima; generalization.
4. Growth of BP networks: problem formulation; learning an additional pattern; a progressive training algorithm; experimental results and performance analysis.
5. Pruning of BP networks: characteristics of hidden nodes in oversized networks (observations from an empirical study; four categories of excessive nodes; why they are excessive); pruning of excessive nodes; experimental results and performance analysis.
6. Dynamic construction of BP networks: a hybrid approach; experimental results and performance analysis.
7. Conclusions: contributions; limitations and suggestions for further research.
Appendix: a handwritten-numeral recognition experiment (feature extraction technique and sampling process); determining the distance d = δ²/2r in Lemma 1.
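The growth-and-pruning idea outlined in the contents above can be illustrated with a short sketch. This is not the thesis's code: the network sizes, the XOR task, the plateau-based growth trigger, and the learning rate are all invented for illustration; it shows only the general pattern of adding hidden-node capacity when back-propagation stalls.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GrowingBP:
    """One-hidden-layer back-propagation network that can append hidden nodes."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.5):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.lr = lr

    def forward(self, X):
        self.h = sigmoid(X @ self.W1)        # hidden activations
        self.y = sigmoid(self.h @ self.W2)   # output activations
        return self.y

    def backward(self, X, T):
        dy = (self.y - T) * self.y * (1 - self.y)      # output deltas
        dh = (dy @ self.W2.T) * self.h * (1 - self.h)  # hidden deltas
        self.W2 -= self.lr * self.h.T @ dy
        self.W1 -= self.lr * X.T @ dh

    def grow(self):
        # Append one hidden node with small random weights (the "growth" step).
        self.W1 = np.hstack([self.W1, rng.normal(0.0, 0.5, (self.W1.shape[0], 1))])
        self.W2 = np.vstack([self.W2, rng.normal(0.0, 0.5, (1, self.W2.shape[1]))])

# XOR: a task that needs hidden capacity.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

net = GrowingBP(2, 2, 1)
prev_err = np.inf
for epoch in range(20000):
    err = float(np.mean((net.forward(X) - T) ** 2))
    net.backward(X, T)
    if epoch % 2000 == 1999:                 # periodic plateau check
        if prev_err - err < 1e-4 and net.W1.shape[1] < 6:
            net.grow()                       # training stalled: add a node
        prev_err = err

preds = (net.forward(X) > 0.5).astype(int)
```

Pruning would run the reverse pass: identify hidden nodes whose removal barely changes the network output and delete the corresponding column of W1 and row of W2.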
94

Neural Reuse and the Evolution of Higher Cognition

Brigham, Andrew 01 May 2019 (has links)
Harvard psychologist Steven Pinker recently examined a problem with understanding human cognition, particularly how the processes of biological evolution could explain the human ability to think abstractly, including the higher cognitive abilities for logic and math (hereafter, HCAs). Pinker credits the formulation of this problem to the co-discoverer of evolution by natural selection, Alfred Russel Wallace, and responds to Wallace's question as follows: "…Nonetheless it is appropriate to engage the profound puzzle [Wallace] raised; namely, why do humans have the ability to pursue abstract intellectual feats such as science, mathematics, philosophy, and law, given that opportunities to exercise these talents did not exist in the foraging lifestyle in which humans evolved and would not have parlayed themselves into advantages in survival and reproduction even if they did?" Wallace accepted that ancestral cognitive operations, such as those for perception and motor control, were products of evolution, but he disagreed with Charles Darwin's view that HCAs are the product of evolution by natural selection. Wallace is not the only one to doubt this. Contemporary philosopher Thomas Nagel likewise accepts that older operations of the brain, such as perception and motor control, are products of evolution, yet denies that the higher types of cognitive operations are. The aim of this dissertation is to argue that HCAs are the product of evolutionary processes, both natural selection and other mechanisms of change: HCAs are products of evolution because they are carried out through the neural reuse of older, evolved brain regions.
Neural reuse is the view that brain regions can be recruited for multiple cognitive uses. Ancestral brain regions, such as regions for perceptual and motor functions, can be reused for carrying out HCAs, such as language, logic, and math.
95

Bidirectional interaction between endocannabinoid and retinoid signalling pathways in the brain

Bu Saeed, Reem Bakr January 2018 (has links)
No description available.
96

The genetics of neural tube defects and twinning

Garabedian, Berdj Hratchia January 1992 (has links)
No description available.
97

Constructive neural networks : generalisation, convergence and architectures

Treadgold, Nicholas K., Computer Science & Engineering, Faculty of Engineering, UNSW January 1999 (has links)
Feedforward neural networks trained via supervised learning have proven to be successful in the field of pattern recognition. The most important feature of a pattern recognition technique is its ability to successfully classify future data. This is known as generalisation. A more practical aspect of pattern recognition methods is how quickly they can be trained and how reliably a good solution is found. Feedforward neural networks have been shown to provide good generalisation on a variety of problems, and a number of training techniques exist that provide fast convergence. Two problems often addressed within the field of feedforward neural networks are how to improve the generalisation and convergence of these pattern recognition techniques. These two problems are addressed in this thesis through the framework of constructive neural network algorithms. Constructive neural networks are a type of feedforward neural network in which the network architecture is built during the training process. The type of architecture built can affect both generalisation and convergence speed. Convergence speed and reliability are important properties of feedforward neural networks. These properties are studied by examining different training algorithms and the effect of using a constructive process. A new gradient-based training algorithm, SARPROP, is introduced. This algorithm addresses the problems of poor convergence speed and reliability when using a gradient-based training method. SARPROP is shown to increase both convergence speed and the chance of convergence to a good solution. This is achieved through the combination of gradient-based and simulated annealing methods. The convergence properties of various constructive algorithms are examined through a series of empirical studies.
The results of these studies demonstrate that the cascade architecture allows for faster, more reliable convergence using a gradient-based method than a single-layer architecture with a comparable number of weights. It is shown that constructive algorithms that bias the search direction of the gradient-based training algorithm for the newly added hidden neurons produce smaller networks and more rapid convergence. A constructive algorithm using search direction biasing is shown to converge to solutions with networks that are unreliable and inefficient to train using a non-constructive gradient-based algorithm. The technique of weight freezing is shown to result in larger architectures than those obtained from training the whole network. Improving the generalisation ability of constructive neural networks is an important area of investigation. A series of empirical studies are performed to examine the effect of regularisation on generalisation in constructive cascade algorithms. It is found that the combination of early stopping and regularisation results in better generalisation than the use of early stopping alone. A cubic regularisation term that greatly penalises large weights is shown to be beneficial for generalisation in cascade networks. An adaptive method of setting the regularisation magnitude in constructive networks is introduced and is shown to produce generalisation results similar to those obtained with a fixed, user-optimised regularisation setting. This adaptive method also often results in the construction of smaller networks for more complex problems. The insights obtained from the SARPROP algorithm and from the convergence and generalisation empirical studies are used to create a new constructive cascade algorithm, acasper. This algorithm is extensively benchmarked and is shown to obtain good generalisation results in comparison to a number of well-respected and successful neural network algorithms.
A technique of incorporating the validation data into the training set after network construction is introduced and is shown to generally result in similar or improved generalisation. The difficulties of implementing a cascade architecture in VLSI are described, and results are given on the effect of the cascade architecture on such attributes as weight growth, fan-in, network depth, and propagation delay. Two variants of the cascade architecture are proposed. These new architectures are shown to produce similar generalisation results to the cascade architecture, while also addressing the problems of VLSI implementation of cascade networks.
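SARPROP's central idea as the abstract describes it, combining gradient-based training with simulated annealing, can be hedged into a toy sketch. Note this is not the SARPROP algorithm itself (which modifies RPROP's per-weight step-size updates); the test surface, learning rate, and annealing schedule below are invented for illustration of the gradient-plus-annealed-noise combination.

```python
import numpy as np

rng = np.random.default_rng(1)

def annealed_gradient_descent(grad, w0, lr=0.01, T0=0.5, decay=0.99, steps=500):
    """Gradient descent plus noise drawn from N(0, T), with the temperature T
    annealed geometrically: noisy early steps can hop out of poor regions,
    while late steps reduce to plain gradient descent."""
    w = np.asarray(w0, dtype=float).copy()
    T = T0
    for _ in range(steps):
        w = w - lr * grad(w) + rng.normal(0.0, T, size=w.shape)
        T *= decay                       # cool the noise each step
    return w

# A 1-D test surface with many local minima (global minimum at w = 0).
f = lambda w: w**2 + 2.0 * (1.0 - np.cos(3.0 * np.pi * w))
grad_f = lambda w: 2.0 * w + 6.0 * np.pi * np.sin(3.0 * np.pi * w)

w_star = annealed_gradient_descent(grad_f, w0=np.array([2.5]))
```

The design point is the one the abstract makes: pure gradient descent from w0 = 2.5 would settle in the nearest local basin, whereas the early high-temperature noise gives the search a chance to reach a better one before the schedule cools.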
98

Discovery of the novel mouFSnrp gene and the characterisation of its in situ expression profile during mouse neurogenesis

Bradoo, Privahini January 2007 (has links)
Recently, a novel protein family, named neural regeneration peptides (NRPs), was predicted across the rat, human and mouse genomes by one of my supervisors, Dr. Sieg. Synthetic forms of these proteins have previously been shown to act as potent neuronal chemoattractants and to play a major role in neural regeneration. In light of these properties, these peptides are key candidates for drug development against an array of neurodegenerative disorders. The aim of this PhD project was to confirm the existence of a member of the NRP coding gene family annotated in the mouse genome. This gene, called mouse frameshift nrp (mouFSnrp), was hypothesised to exist as a -1 bp frameshift relative to another predicted gene, AlkB. This project involved the identification of the mouFSnrp gene and the characterisation of its expression pattern and ontogeny during mouse neural development. Through the work described in this thesis, the mouFSnrp gene was identified in mouse embryonic cortical cultures and its protein-coding sequence was verified. mouFSnrp expression was shown, via RT-PCR, to be present in neural as well as non-neural tissues. Using non-radioactive in situ hybridisation and immunohistochemical colocalisation studies, interesting insights into the lineage and ontogeny of mouFSnrp expression during brain development were revealed. These results indicate that mouFSnrp expression originates in neural stem cells of the developing cortex and appears to be preferentially continued via the radial glial lineage. mouFSnrp expression is carried forward via the neurogenic radial glia into their daughter neuronal progeny as well as postnatal astrocytes. In the postnatal brain, mouFSnrp gene transcripts were also observed in the olfactory bulb and the hippocampus, both of which are known to have high neurogenic potential.
In general, the radial-glia-related nature of mouFSnrp expression appears to be a hallmark of its expression pattern throughout neural development. This thesis provides the first confirmation of the existence of a completely novel gene, mouFSnrp, and its putative -1 translational frameshifting structure. Further, preliminary data presented in this thesis regarding the mouFSnrp in situ expression pattern during mouse brain development may suggest a key role for the gene in neuronal migration and neurogenesis in mice. (FRST Bright Futures Enterprise Fellowship.)
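What a -1 frameshift means for the protein product can be made concrete with a small sketch. The demo sequence, the slip site, and the simple "slip back one base" model below are invented for illustration (real programmed frameshifting involves slippery sequences and RNA secondary structure, and mouFSnrp's actual sequence is not given here); only the standard genetic code table is factual.

```python
# Build the standard genetic code table (canonical base order T, C, A, G).
BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def translate(dna, frame=0):
    """Translate codons from offset `frame`, stopping at a stop codon."""
    protein = []
    for i in range(frame, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

def minus1_frameshift(dna, slip_site):
    """Toy model of a -1 frameshift: read in frame 0 up to a codon boundary
    at `slip_site`, then continue one base earlier (the -1 frame)."""
    return translate(dna[:slip_site]) + translate(dna[slip_site - 1:])

demo = "ATGAAACCCGGG"
in_frame = translate(demo)            # "MKPG"
shifted = minus1_frameshift(demo, 6)  # "MKTR" -- same DNA, different protein
```

The point of the sketch is why a frameshifted gene can hide inside an annotated one: the same bases read one position earlier yield an entirely different downstream peptide.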
99

A physiologically realistic neural network model of visual updating across 3-D eye movements

Keith, Gerald Phillip. January 2004 (has links)
Thesis (M.A.)--York University, 2004. Graduate Programme in Psychology. Typescript. Includes bibliographical references (leaves 146-156). Also available on the Internet (URL not yet available).
100

Structural Impairment Detection Using Arrays of Competitive Artificial Neural Networks

Story, Brett May 2012 (has links)
Aging railroad bridge infrastructure is subject to increasingly higher demands such as heavier loads, increased speed, and increased frequency of traffic. The challenges facing railroad bridge infrastructure provide an opportunity to develop improved systems of monitoring railroad bridges. This dissertation outlines the development and implementation of a Structural Impairment Detection System (SIDS) that incorporates finite element modeling and instrumentation of a testbed structure, neural algorithm development, and the integration of data acquisition and impairment detection tools. Ultimately, data streams from the Salmon Bay Bridge are autonomously recorded and interrogated by competitive arrays of artificial neural networks for patterns indicative of specific structural impairments. Heel trunnion bascule bridges experience significant stress ranges in critical truss members. Finite element modeling of the Salmon Bay Bridge testbed provided an estimate of nominal structural behavior and indicated types and locations of possible impairments. Analytical modeling was initially performed in SAP2000 and then refined with ABAQUS. Modeling results from the Salmon Bay Bridge were used to determine measurable quantities sensitive to modeled impairments. An instrumentation scheme was designed and installed on the testbed to record these diagnostically significant data streams. Analytical results revealed that main chord members and bracing members of the counterweight truss are sensitive to modeled structural impairments. Finite element models and experimental observations indicated maximum stress ranges of approximately 22 ksi on main chord members of the counterweight truss. A competitive neural algorithm was developed to examine analytical and experimental data streams. Analytical data streams served as training vectors for training arrays of competitive neural networks.
A quasi-static array of neural networks was developed to provide an indication of the operating condition at specific intervals of the bridge's operation. Competitive neural algorithms correctly classified 94% of simulated data streams. Finally, a stand-alone application was integrated with the Salmon Bay Bridge data acquisition system to autonomously analyze recorded data streams and produce bridge condition reports. Based on neural algorithms trained on modeled impairments, the Salmon Bay Bridge operates in a manner most resembling one of two operating conditions: 1) unimpaired, or 2) impaired embedded member at the southeast corner of the counterweight.
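The competitive (winner-take-all) classification scheme the abstract describes can be hedged into a minimal sketch. The two-dimensional synthetic "data streams," the cluster centres, and the two operating conditions below are invented stand-ins for the bridge's recorded strain data, not the dissertation's actual system or training vectors.

```python
import numpy as np

rng = np.random.default_rng(2)

def train_competitive(X, init_idx, lr=0.2, epochs=50):
    """Winner-take-all learning: the prototype nearest each input wins and
    moves toward it. Prototypes are seeded on chosen samples (one per expected
    condition) for demo stability; plain random seeding can leave dead units."""
    protos = X[list(init_idx)].copy()
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            w = int(np.argmin(np.linalg.norm(protos - x, axis=1)))  # winner
            protos[w] += lr * (x - protos[w])                       # move winner
    return protos

def classify(protos, x):
    """Report which operating condition (prototype) a data stream resembles."""
    return int(np.argmin(np.linalg.norm(protos - x, axis=1)))

# Synthetic streams: condition 0 ("unimpaired") vs condition 1 ("impaired").
unimpaired = rng.normal([0.0, 0.0], 0.1, size=(50, 2))
impaired = rng.normal([1.0, 1.0], 0.1, size=(50, 2))
X = np.vstack([unimpaired, impaired])

protos = train_competitive(X, init_idx=(0, 50))
condition = classify(protos, np.array([0.05, -0.02]))  # near "unimpaired"
```

After training, each prototype sits near the centre of one condition's data, so classifying a new stream reduces to a nearest-prototype lookup, which is what lets such an array run autonomously on recorded data.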
