1

Artificial neural nets: a critical analysis of their effectiveness as empirical technique for cognitive modelling.

Krebs, Peter Rudolf, School of History & Philosophy of Science, UNSW, January 2007
This thesis is concerned with the computational modelling and simulation of physiological structures and cognitive functions of brains through the use of artificial neural nets. While the structures of these models are loosely related to neurons and physiological structures observed in brains, the extent to which we can accept claims about how neurons and brains really function based on such models depends largely on judgments about the fitness of (virtual) computer experiments as empirical evidence. The thesis examines the computational foundations of neural models, neural nets, and some computational models of higher cognitive functions in terms of their ability to provide empirical support for theories within the framework of Parallel Distributed Processing (PDP). Models of higher cognitive functions in this framework are often presented in forms that hybridise top-down (e.g. employing terminology from Psychology or Linguistics) and bottom-up (neurons and neural circuits) approaches to cognition. In this thesis I argue that the use of terminology from either approach can blind us to the highly theory-laden nature of the models, and that this tends to produce overly optimistic evaluations of the empirical value of computer experiments on these models. I argue, further, that some classes of computational models and simulations based on methodologies that hybridise top-down and bottom-up approaches are ill-designed. Consequently, many of the theoretical claims based on these models cannot be supported by experiments with such models. As a result, I question the effectiveness of computer experiments with artificial neural nets as an empirical technique for cognitive modelling.
3

Optimisation strategies in diffusion tensor MR imaging /

Skare, Stefan, January 2002
Diss. (summary), Stockholm: Karolinska institutet, 2002. / With 5 accompanying papers.
4

Contributions to neuromorphic and reconfigurable circuits and systems

Nease, Stephen Howard, 08 July 2011
This thesis presents a body of work in the field of reconfigurable and neuromorphic circuits and systems. Three main projects were undertaken. The first was using a Field-Programmable Analog Array (FPAA) to model the cable behavior of dendrites using analog circuits. The second was to design, lay out, and test part of a new FPAA, the RASP 2.9v. The final project was to use floating-gate programming to remove offsets in a neuromorphic FPAA, the RASP Neuron 1D.
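The dendritic cable behaviour mentioned above can be illustrated numerically. The sketch below is a generic discretized passive cable (a chain of RC compartments coupled by axial conductances), not the thesis's FPAA implementation; all parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a passive dendritic cable, discretized into N
# RC compartments coupled by axial conductances (the behaviour the
# FPAA models in analog hardware). All values are illustrative.
N = 50            # number of compartments
dt = 0.01         # time step (arbitrary units)
C = 1.0           # membrane capacitance per compartment
g_leak = 0.1      # leak conductance to rest
g_axial = 5.0     # coupling conductance between neighbours

V = np.zeros(N)   # membrane potential relative to rest
I_inj = np.zeros(N)
I_inj[0] = 1.0    # constant current injected at the proximal end

for _ in range(20000):
    # axial current flowing in from neighbouring compartments
    axial = np.zeros(N)
    axial[1:] += g_axial * (V[:-1] - V[1:])
    axial[:-1] += g_axial * (V[1:] - V[:-1])
    dV = (I_inj - g_leak * V + axial) / C
    V = V + dt * dV

# Near steady state the voltage decays roughly exponentially with
# distance from the injection site, the hallmark of passive cables.
print(V[0], V[N // 2], V[-1])
```

The analog FPAA realizes the same resistive-capacitive structure directly in hardware; this forward-Euler loop is only a software stand-in for that behaviour.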
5

Hippocampal function and spatial information processing : computational and neural analyses

Hetherington, Phil A. (Phillip Alan), January 1995
The hippocampus is necessary for normal memory in rodents, birds, monkeys, and people. Damage to the hippocampus can result in the inability to learn new facts, defined by the relationship among stimuli. In rodents, spatial learning involves learning about the relationships among stimuli, and exemplifies the kind of learning that requires the hippocampus. Therefore, understanding the neural mechanisms underlying spatial learning may elucidate basic memory processes. Many hippocampal neurons fire when behaving rats, cats, or monkeys are in circumscribed regions (place fields) of an environment. These neurons, called place cells, fire in relation to distal stimuli, but can persist in signaling location when the stimuli are removed or the lights are turned off (memory fields). In this thesis, computational models of spatial information processing simulated many of the defining properties of hippocampal place cells, including memory fields. Furthermore, the models suggested a neurally plausible mechanism of goal-directed spatial navigation which involved the encoding of distances in the connections between place cells. To navigate using memory fields, the models required an excitatory, distributed, and plastic association system among place cells. Such properties are well characterized in area CA3 of the hippocampus. In this thesis, a new electrophysiological study provides evidence that a second system in the dentate gyrus has similar properties. Thus, two circuits in the hippocampus meet the requirements of the models. Some predictions of the models were then tested in a single-unit recording experiment in behaving rats. Place fields were more likely to occur in information-rich areas of the environment, and removal of single cues altered place fields in a way consistent with the distance-encoding mechanism suggested by the models. It was concluded that a distance-encoding theory of rat spatial navigation has much descriptive and predictive utility, but most of its predic
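As a toy illustration of the place-cell code described above (not the thesis's models), the sketch below gives each model cell a Gaussian firing field and recovers position from the population activity; all names and parameter values are illustrative assumptions.

```python
import numpy as np

# Toy place-cell population: each cell fires maximally when the
# animal is at its preferred location (its "place field") and its
# rate falls off with distance from that field centre.
rng = np.random.default_rng(0)
n_cells = 100
centers = rng.uniform(0, 1, size=(n_cells, 2))  # field centres in a unit arena
sigma = 0.1                                     # field width

def rates(pos):
    """Firing rates of all place cells at position pos."""
    d2 = np.sum((centers - pos) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Population-vector readout: the rate-weighted mean of the field
# centres recovers the animal's position from the population code.
pos = np.array([0.3, 0.7])
r = rates(pos)
decoded = (r[:, None] * centers).sum(axis=0) / r.sum()
print(decoded)  # close to (0.3, 0.7)
```

The thesis's distance-encoding mechanism goes further, storing distances in the connections between such cells; this sketch only shows the underlying spatial code.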
7

Improving associative memory in a network of spiking neurons

Hunter, Russell I., January 2011
In this thesis we use computational neural network models to examine the dynamics and functionality of the CA3 region of the mammalian hippocampus. The emphasis of the project is to investigate how the dynamic control structures provided by inhibitory circuitry and cellular modification may affect the CA3 region during the recall of previously stored information. The CA3 region is commonly thought to work as a recurrent auto-associative neural network because of the neurophysiological characteristics found there, such as recurrent collaterals, strong and sparse synapses from external inputs, and plasticity between coactive cells. Associative memory models have been developed using various configurations of mathematical artificial neural networks, the first of which were developed over 40 years ago. Within these models we can store information via changes in the strength of connections between simplified (two-state) model neurons. These memories can be recalled when a cue (noisy or partial) is instantiated upon the net. The type of information they can store is quite limited, owing to the simplicity of the hard-limiting nodes, which are commonly associated with a binary activation threshold. We build a much more biologically plausible model with complex spiking cell models and with realistic synaptic properties between cells. This model is based upon some of the many details we now know of the neuronal circuitry of the CA3 region. We implemented the model in computer software using Neuron and Matlab and tested it by running simulations of storage and recall in the network. By building this model we gain new insights into how different types of neurons, and the complex circuits they form, actually work. The mammalian brain consists of complex resistive-capacitive electrical circuitry which is formed by the interconnection of large numbers of neurons.
A principal cell type is the pyramidal cell within the cortex, which is the main information processor in our neural networks. Pyramidal cells are surrounded by diverse populations of interneurons, which are proportionally fewer in number than the pyramidal cells and which form connections with pyramidal cells and other inhibitory cells. By building detailed computational models of recurrent neural circuitry we explore how these microcircuits of interneurons control the flow of information through pyramidal cells and regulate the efficacy of the network. We also explore the effect of cellular modification due to neuronal activity and the effect of incorporating spatially dependent connectivity on the network during recall of previously stored information. In particular we implement a spiking neural network proposed by Sommer and Wennekers (2001). We consider methods for improving associative memory recall, inspired by the work of Graham and Willshaw (1995), in which mathematical transforms are applied to an artificial neural network to improve the recall quality within the network. The networks tested contain either 100 or 1000 pyramidal cells with 10% connectivity applied and a partial cue instantiated, and with a global pseudo-inhibition. We investigate three methods. Firstly, applying localised disynaptic inhibition, which will proportionalise the excitatory postsynaptic potentials and provide a fast-acting reversal potential; this should help to reduce the variability in signal propagation between cells and provide further inhibition to help synchronise the network activity. Secondly, implementing a persistent sodium channel in the cell body, which will act to non-linearise the activation threshold: after a given membrane potential, the amplitude of the excitatory postsynaptic potential (EPSP) is boosted to push cells which receive slightly more excitation (most likely high units) over the firing threshold.
Finally, implementing spatial characteristics of the dendritic tree will allow a greater probability of a modified synapse existing after 10% random connectivity has been applied throughout the network. We apply spatial characteristics by scaling the conductance weights of excitatory synapses, which simulates the loss of potential in synapses found in the outer dendritic regions due to increased resistance. To further increase the biological plausibility of the network we remove the pseudo-inhibition and apply realistic basket cell models with differing configurations for a global inhibitory circuit. The networks are configured with: a single basket cell providing feedback inhibition; 10% basket cells providing feedback inhibition, where 10 pyramidal cells connect to each basket cell; and finally, 100% basket cells providing feedback inhibition. These networks are compared and contrasted for their effect on recall quality and on network behaviour. We have found promising results from applying biologically plausible recall strategies and network configurations, which suggests that inhibition and cellular dynamics are pivotal in learning and memory.
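The classical two-state associative net that this abstract contrasts with its spiking model can be sketched as follows. This is a generic clipped-Hebbian (Willshaw-style) net recalled from a partial cue, not the thesis's implementation; the sizes, sparsity, and threshold rule are illustrative assumptions.

```python
import numpy as np

# Two-state associative net: memories are stored as binary changes
# in a connection matrix and recalled from a partial cue by
# thresholding each unit's weighted input ("dendritic sum").
rng = np.random.default_rng(1)
n, active = 256, 16           # units per pattern, active units per pattern

def make_pattern():
    p = np.zeros(n, dtype=int)
    p[rng.choice(n, size=active, replace=False)] = 1
    return p

patterns = [make_pattern() for _ in range(20)]

# Clipped Hebbian storage: a synapse is switched on whenever its
# pre- and post-synaptic units are co-active in any stored pattern.
W = np.zeros((n, n), dtype=int)
for p in patterns:
    W |= np.outer(p, p)

# Recall from a partial cue: keep half of the active units, then
# threshold each unit's input at the number of cue units.
target = patterns[0]
cue = target.copy()
cue[np.flatnonzero(target)[active // 2:]] = 0   # drop half the 1s
sums = W @ cue
recalled = (sums >= cue.sum()).astype(int)
print((recalled == target).sum(), "of", n, "units correct")
```

The thesis replaces these hard-limiting two-state units with spiking cell models and real inhibitory circuitry, but the storage-and-recall logic being improved is of this kind.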
