21.
Techniques for FPGA neural modeling / Weinstein, Randall Kenneth. 21 November 2006 (has links)
Neural simulations and general dynamical system modeling consistently push the limits of available computational horsepower. This is occurring for a number of reasons: 1) models are progressing in complexity as our biological understanding increases, 2) high-level analysis tools including parameter searches and sensitivity analyses are becoming more prevalent, and 3) computational models are increasingly utilized alongside biological preparations in a dynamic clamp configuration. General-purpose computers, as the primary target for modeling problems, are the simplest platform on which to implement models due to the rich variety of available tools. However, computers, limited by their generality, perform sub-optimally relative to custom hardware solutions. The goal of this thesis is to develop a new cost-effective and easy-to-use platform delivering orders of magnitude improvement in throughput over personal computers.
We suggest that FPGAs, or field programmable gate arrays, provide an outlet for dramatically enhanced performance. FPGAs are high-speed, reconfigurable devices that can implement any digital logic operation using an array of parallel computing elements. Already common in fields such as signal processing, radar, medical imaging, and consumer electronics, FPGAs have yet to gain traction in neural modeling, despite their high-performance capability, because of their steep learning curve and a lack of suitable tools. The overall objective of this work has been to overcome these shortfalls and enable adoption of FPGAs within the neural modeling community.
We embarked on an incremental process to develop an FPGA-based modeling environment. We first developed a prototype multi-compartment motoneuron model using a standard digital-design methodology; this prototype ran 10x to 100x faster than software simulations. Next, we developed canonical modeling methodologies for manual generation of typical neural model topologies. We then developed a series of tools and techniques for analog interfacing, digital protocol processing, and real-time model tuning. This thesis culminates with the development of Dynamo, a fully-automated model compiler for the direct conversion of a model description into an FPGA implementation.
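The speedups described above come from mapping ODE integration onto parallel fixed-point arithmetic pipelines. As an illustration only (not code from the thesis; the leaky integrate-and-fire model, the Q16.16 format, and all constants are assumptions), here is a minimal sketch of the kind of fixed-point Euler update an FPGA datapath implements with plain integer adders and multipliers:

```python
# Illustrative sketch, not code from the thesis: one Euler step of a leaky
# integrate-and-fire neuron in Q16.16 fixed point, the arithmetic style an
# FPGA pipeline realizes with integer hardware.

FRAC = 16            # fractional bits in Q16.16
ONE = 1 << FRAC

def to_fix(x: float) -> int:
    """Convert a real number to Q16.16 fixed point."""
    return int(round(x * ONE))

def fix_mul(a: int, b: int) -> int:
    """Fixed-point multiply: full-width product, then renormalize."""
    return (a * b) >> FRAC

def lif_step(v: int, i_in: int, leak: int, v_thresh: int, v_reset: int):
    """One Euler step of dv/dt = -leak*v + i_in, with threshold and reset."""
    v = v + fix_mul(leak, -v) + i_in
    if v >= v_thresh:
        return v_reset, True      # spike: reset the membrane
    return v, False

v, spikes = to_fix(0.0), 0
for _ in range(100):
    v, fired = lif_step(v, to_fix(0.15), to_fix(0.1), to_fix(1.0), to_fix(0.0))
    spikes += int(fired)
```

On an FPGA every neuron gets its own copy of this datapath, which is where the throughput advantage over a sequential processor comes from.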
22.
Toward the neurocomputer: goal-directed learning in embodied cultured networks / Chao, Zenas C. January 2007 (has links)
Thesis (Ph.D)--Biomedical Engineering, Georgia Institute of Technology, 2008. / Committee Chair: Potter, Steve; Committee Member: Butera, Robert; Committee Member: DeMarse, Thomas; Committee Member: Jaeger, Dieter; Committee Member: Lee, Robert.
23.
Techniques for FPGA neural modeling / Weinstein, Randall Kenneth. January 2006 (has links)
Thesis (Ph.D)--Bioengineering, Georgia Institute of Technology, 2007. / Committee Chair: Lee, Robert; Committee Member: Butera, Robert; Committee Member: DeWeerth, Steve; Committee Member: Madisetti, Vijay; Committee Member: Voit, Eberhard. Part of the SMARTech Electronic Thesis and Dissertation Collection.
24.
Concussion in contact sport: investigating the neurocognitive profile of Afrikaans adolescent rugby players / Horsman, Mark. January 2010 (has links)
A number of computerised tests have been developed especially to facilitate the medical management of sports-related concussion. Probably the most widely used of these programmes is the ImPACT test, which was developed in the USA and is registered with the HPCSA for use in the South African context. A recent Afrikaans version of the test served as the basis of the present study, with the following objectives: (i) to collect Afrikaans ImPACT normative data on a cohort of Afrikaans first language adolescent rugby players with Model C education for comparison with existing South African English first language adolescent rugby players with Private/Model C schooling, and (ii) to investigate the pre- versus post-season ImPACT neurocognitive test profiles of this cohort of Afrikaans first language adolescent rugby players versus equivalent noncontact sports controls. The results for Part 1 of the study generally demonstrate poorer performance in respect of the Afrikaans cohort, which is understood to be the result of poorer quality of education. The results for Part 2 demonstrated failure of the rugby group to benefit from practice on the ImPACT Visual Motor Speed composite score to the same extent as the control group. It is argued that this apparent cognitive vulnerability in the rugby group is due to lowered cognitive reserve capacity in association with long term exposure to concussive and sub-concussive injury.
25.
Hardware implementation of the complex Hopfield neural network / Cheng, Chih Kang. 01 January 1995 (has links)
No description available.
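Since no description accompanies this entry, for context only: a complex-valued Hopfield network is commonly formulated with unit-magnitude neuron states, Hebbian outer-product weights, and updates that quantize each activation back onto a set of phase levels. A minimal sketch under those assumptions (every detail here is the textbook formulation, not this thesis's design):

```python
import numpy as np

# Illustrative sketch of the common complex-Hopfield formulation: states are
# unit phasors, weights come from a Hebbian outer product, and the update
# quantizes each activation to the nearest of K phase levels ("csign").

K = 4                                       # phase levels on the unit circle
phases = np.exp(2j * np.pi * np.arange(K) / K)

def quantize(z):
    # Nearest unit phasor to each complex activation.
    idx = np.argmax(np.real(np.conj(phases)[None, :] * z[:, None]), axis=1)
    return phases[idx]

def store(patterns):
    # Hebbian rule: W = (1/N) * sum_p outer(p, conj(p)), zero diagonal.
    N = patterns.shape[1]
    W = sum(np.outer(p, np.conj(p)) for p in patterns) / N
    np.fill_diagonal(W, 0)
    return W

def recall(W, z, steps=10):
    for _ in range(steps):
        z = quantize(W @ z)
    return z

rng = np.random.default_rng(0)
pattern = phases[rng.integers(0, K, size=16)]   # one stored phasor pattern
W = store(pattern[None, :])
probe = pattern.copy()
probe[:3] = phases[rng.integers(0, K, size=3)]  # corrupt 3 of 16 entries
out = recall(W, probe)                          # settles back to the pattern
```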
26.
An FPGA Implementation of a High Performance AER Packet Network / Munipalli, Sirish Kumar. 26 March 2013 (has links)
This thesis presents a design to route the spikes in a cognitive computing project called Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE). SyNAPSE is a DARPA-funded program to develop electronic neuromorphic machine technology that scales to biological levels. The basic computational block in the SyNAPSE system is the asynchronous spike processor (ASP) chip. This analog core contains the neurons and synapses in a neural fabric and performs the neural and synaptic computations. An ASP takes asynchronous pulses (spikes) as inputs and, after some small delay, produces asynchronous pulses as outputs. The ASP chips are organized in an n x n (where n ≈ 10) 2-dimensional grid with a dedicated node for each chip. This interconnected network is called the Digital Fabric (DF), and each node is called a Digital Fabric Node (DFN). The DF is a packet network that routes pulse (AER, address event representation) packets between ASPs. This thesis also presents a technique for design implementation on an FPGA, performance testing of the network, and validation of the network using various tools.
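Routing in such 2-D grid packet networks is typically dimension-ordered: a packet travels along one axis until the coordinate matches, then along the other. A behavioral sketch of that scheme (an illustrative model, not the SyNAPSE RTL; the port names and coordinate convention are assumptions):

```python
# Illustrative model of dimension-order (X-then-Y) routing of an AER packet
# across an n x n grid of nodes. Each node compares the packet's destination
# with its own coordinates to pick an output port.

from typing import List, Tuple

def route_hop(node: Tuple[int, int], dest: Tuple[int, int]) -> str:
    x, y = node
    dx, dy = dest
    if dx != x:
        return "EAST" if dx > x else "WEST"
    if dy != y:
        return "NORTH" if dy > y else "SOUTH"
    return "LOCAL"          # deliver the spike to this node's processor

def route_path(src, dest) -> List[Tuple[int, int]]:
    """Trace the full hop sequence a packet takes from src to dest."""
    path, node = [src], src
    while True:
        port = route_hop(node, dest)
        if port == "LOCAL":
            return path
        x, y = node
        node = {"EAST": (x + 1, y), "WEST": (x - 1, y),
                "NORTH": (x, y + 1), "SOUTH": (x, y - 1)}[port]
        path.append(node)

p = route_path((0, 0), (3, 2))   # 3 hops east, then 2 hops north
```

Dimension-order routing is popular in hardware because each node's decision needs only two coordinate comparisons, and the scheme is deadlock-free on a mesh.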
27.
Development and VLSI implementation of a new neural net generation method / Bittner, Ray Albert. 04 December 2009 (has links)
The author begins with a short introduction to current neural network practices and pitfalls, including an in-depth discussion of the meaning behind the equations. Specifically, a description of the underlying processes involved is given which likens training to the biological process of cell differentiation. Building on these ideas, an improved method of generating integer-based binary neural networks is developed. This type of network is particularly useful for the optical character recognition problem, but methods for usage in the more general case are discussed. The new method does not use training as such. Rather, the training data is analyzed to determine the statistically significant relationships therein. These relationships are used to generate a neural network structure that is an idealization of the trained version in that it can accurately extrapolate from existing knowledge by exploiting known relationships in the training data.
The thesis then turns to the design and testing of a VLSI CMOS chip which was created to utilize the new technique. The chip is based on the MOSIS 2µm process, using a 2200λ x 2200λ die that was shaped into a special-purpose microprocessor usable in any of a number of pattern recognition applications with low power requirements and/or limiting considerations. Simulation results of the methods are then given, in which it is shown that error rates of less than 5% for inputs containing up to 30% noise can easily be achieved. Finally, the thesis concludes with ideas on how the various methods described might be improved further. / Master of Science
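The idea of generating integer weights from statistical relationships in the training data, rather than by iterative training, can be illustrated with a toy sketch (the thesis's actual procedure is not reproduced here; the agreement-count weight rule and the tiny dataset are assumptions for illustration):

```python
# Illustrative sketch: build an integer-weight binary classifier directly from
# training statistics. Each weight is the signed count of agreement between an
# input bit and the class label over the training set -- no gradient training.

def build_weights(samples, labels):
    """samples: list of ±1 bit vectors; labels: ±1 class labels."""
    n = len(samples[0])
    # w[i] > 0 if bit i tends to agree with the label, < 0 if it disagrees.
    return [sum(x[i] * y for x, y in zip(samples, labels)) for i in range(n)]

def classify(w, x):
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= 0 else -1

# Toy data: the label is the majority vote of the first three bits; bit 3 is noise.
samples = [
    [ 1,  1,  1, -1], [ 1,  1, -1,  1], [ 1, -1,  1,  1],
    [-1, -1, -1,  1], [-1, -1,  1, -1], [-1,  1, -1, -1],
]
labels = [1, 1, 1, -1, -1, -1]
w = build_weights(samples, labels)
preds = [classify(w, x) for x in samples]
```

The statistically informative bits receive large-magnitude weights from the counts alone, which is the flavor of "generation instead of training" the abstract describes.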
28.
Inductive machine learning bias in knowledge-based neurocomputing / Snyders, Sean. 04 1900 (has links)
Thesis (MSc) -- Stellenbosch University, 2003. / ENGLISH ABSTRACT: The integration of symbolic knowledge with artificial neural networks is becoming an increasingly popular paradigm for solving real-world problems. This paradigm, named knowledge-based neurocomputing, provides means for using prior knowledge to determine the network architecture, to program a subset of weights to induce a learning bias which guides network training, and to extract refined knowledge from trained neural networks. The role of neural networks then becomes that of knowledge refinement. It thus provides a methodology for dealing with uncertainty in the initial domain theory. In this thesis, we address several advantages of this paradigm and propose a solution for the open question of determining the strength of this learning, or inductive, bias. We develop a heuristic for determining the strength of the inductive bias that takes the network architecture, the prior knowledge, the learning method, and the training data into consideration. We apply this heuristic to well-known synthetic problems as well as published difficult real-world problems in the domains of molecular biology and medical diagnosis. We found that not only do the networks trained with this adaptive inductive bias show superior performance over networks trained with the standard method of determining the strength of the inductive bias, but the refined knowledge extracted from these trained networks delivers more concise and accurate domain theories. / AFRIKAANSE OPSOMMING: The integration of symbolic knowledge with artificial neural networks is becoming an increasingly popular paradigm for solving real-world problems. This paradigm, called knowledge-based neurocomputing, provides the ability to use prior knowledge to determine the network architecture, to program a subset of weights to induce a learning bias that guides network training, and to extract refined knowledge from trained networks. The role of neural networks then becomes that of knowledge refinement. It thus provides a methodology for the treatment of uncertainty in the initial domain theory. In this thesis we address several advantages contained in this paradigm and propose a solution to the open question of determining the strength of this learning, or inductive, bias. We develop a heuristic for determining the inductive bias that takes the network architecture, the prior knowledge, the learning method, and the training data into account. We apply this heuristic to well-known synthetic problems as well as to published difficult real-world problems in the fields of molecular biology and medical diagnostics. We find that not only do the networks trained with the adaptive inductive bias show superior performance over the networks trained with the standard method of determining the strength of the inductive bias, but also that the refined knowledge extracted from these trained networks delivers more concise and accurate domain theories.
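The notion of programming a subset of weights from prior knowledge, so that a symbolic rule biases the network before training, can be illustrated in the KBANN style (this is an assumed textbook formulation, not the thesis's heuristic; the strength parameter `omega` plays the role of the inductive bias being tuned):

```python
import math

# Illustrative KBANN-style sketch: the symbolic rule "output :- a AND b"
# programs a sigmoid unit's initial weights. The strength omega controls how
# strongly the prior knowledge biases subsequent learning.

def rule_to_weights(antecedents, omega):
    """Positive antecedents get weight +omega; the bias is set so the unit
    fires only when all antecedents are true (an AND gate at large omega)."""
    w = [omega] * len(antecedents)
    bias = -omega * (len(antecedents) - 0.5)
    return w, bias

def unit(x, w, bias):
    # Standard sigmoid unit on the programmed weights.
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + bias)))

w, b = rule_to_weights(["a", "b"], omega=4.0)
truth_table = {inputs: unit(inputs, w, b)
               for inputs in [(0, 0), (0, 1), (1, 0), (1, 1)]}
```

A small `omega` leaves the unit near-linear and easily overridden by training data; a large `omega` makes the rule nearly hard-wired. Choosing that strength from the architecture, prior knowledge, learning method, and data is exactly the open question the thesis addresses.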
29.
Learning in silicon: a floating-gate based, biophysically inspired, neuromorphic hardware system with synaptic plasticity / Brink, Stephen Isaac. 24 August 2012 (has links)
The goal of neuromorphic engineering is to create electronic systems that model the behavior of biological neural systems. Neuromorphic systems can leverage a combination of analog and digital circuit design techniques to enable computational modeling, with orders of magnitude of reduction in size, weight, and power consumption compared to the traditional modeling approach based upon numerical integration. These benefits of neuromorphic modeling have the potential to facilitate neural modeling in resource-constrained research environments. Moreover, they will make it practical to use neural computation in the design of intelligent machines, including portable, battery-powered, and energy harvesting applications. Floating-gate transistor technology is a powerful tool for neuromorphic engineering because it allows dense implementation of synapses with nonvolatile storage of synaptic weights, cancellation of process mismatch, and reconfigurable system design. A novel neuromorphic hardware system, featuring compact and efficient channel-based model neurons and floating-gate transistor synapses, was developed. This system was used to model a variety of network topologies with up to 100 neurons. The networks were shown to possess computational capabilities such as spatio-temporal pattern generation and recognition, winner-take-all competition, bistable activity implementing a "volatile memory", and wavefront-based robotic path planning. Some canonical features of synaptic plasticity, such as potentiation of high frequency inputs and potentiation of correlated inputs in the presence of uncorrelated noise, were demonstrated. Preliminary results regarding formation of receptive fields were obtained. Several advances in enabling technologies, including methods for floating-gate transistor array programming, and the creation of a reconfigurable system for studying adaptation in floating-gate transistor circuits, were made.
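One of the computational capabilities listed above, winner-take-all competition, can be sketched in software (a rate-based analogue with assumed dynamics and constants, not the floating-gate circuit the thesis implements):

```python
# Illustrative rate-based winner-take-all: each unit is excited by its input
# and its own activity, and suppressed by the summed activity of the whole
# population, so only the most strongly driven unit remains active.

def wta(inputs, steps=200, dt=0.05, inhibition=2.0):
    a = [0.0] * len(inputs)
    for _ in range(steps):
        total = sum(a)                       # global inhibitory feedback
        a = [max(0.0, ai + dt * (ui + ai - inhibition * total))
             for ai, ui in zip(a, inputs)]
    return a

act = wta([0.3, 1.0, 0.6])
winner = act.index(max(act))                 # unit 1, the largest input
```

In analog neuromorphic hardware the global inhibitory sum is often a single shared wire, which is why WTA circuits are so compact relative to their digital equivalents.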
30.
Toward a brain-like memory with recurrent neural networks / Salihoglu, Utku. 12 November 2009 (has links)
For the last twenty years, several assumptions have been expressed in the fields of information processing, neurophysiology and cognitive sciences. First, neural networks and their dynamical behavior in terms of attractors are the natural way adopted by the brain to encode information. Any information item to be stored in the neural network should be coded in some way or another in one of the dynamical attractors of the brain, and retrieved by stimulating the network to trap its dynamics in the desired item's basin of attraction. The second view shared by neural network researchers is to base the learning of the synaptic matrix on a local Hebbian mechanism. The third assumption is the presence of chaos and the benefit gained by its presence. Chaos, although very simply produced, inherently possesses an infinite number of cyclic regimes that can be exploited for coding information. Moreover, the network randomly wanders around these unstable regimes in a spontaneous way, rapidly proposing alternative responses to external stimuli and easily switching from one of these potential attractors to another in response to any incoming stimulus. Finally, since their introduction sixty years ago, cell assemblies have proved to be a powerful paradigm for brain information processing. After their introduction in artificial intelligence, cell assemblies became commonly used in computational neuroscience as a neural substrate for content-addressable memories.

Based on these assumptions, this thesis provides a computer model of neural network simulation of a brain-like memory. It first shows experimentally that the more information is to be stored in robust cyclic attractors, the more chaos appears as a regime in the background, erratically itinerating among brief appearances of these attractors. Chaos does not appear to be the cause, but the consequence of the learning. However, it appears as a helpful consequence that widens the network's encoding capacity. To learn the information to be stored, two supervised iterative Hebbian learning algorithms are proposed. One leaves unprescribed the semantics of the attractors to be associated with the feeding data, while the other defines it a priori. Both algorithms show good results, even though the first one is more robust and has a greater storing capacity. Building on these promising results, a biologically plausible alternative to these algorithms is proposed, using cell assemblies as the substrate for information. Even though this idea is not new, the mechanisms underlying the formation of cell assemblies are poorly understood and, so far, there are no biologically plausible algorithms that can explain how external stimuli can be stored online in cell assemblies. This thesis provides such a solution, combining a fast Hebbian/anti-Hebbian learning of the network's recurrent connections for the creation of new cell assemblies with a slower feedback signal that stabilizes the cell assemblies by learning the feed-forward input connections. This last mechanism is inspired by the retroaxonal hypothesis. / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
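The first assumption, storage and retrieval via basins of attraction under Hebbian learning, can be illustrated with a classical binary Hopfield sketch (deliberately far simpler than the chaotic recurrent model of the thesis; the network size, pattern count, and noise level are arbitrary choices):

```python
import numpy as np

# Illustrative classical Hopfield attractor memory: Hebbian storage places each
# pattern in its own basin of attraction, so a corrupted cue settles back onto
# the stored item when the network dynamics are iterated.

def hebbian_store(patterns):
    N = patterns.shape[1]
    W = (patterns.T @ patterns) / N          # outer-product (Hebbian) rule
    np.fill_diagonal(W, 0)                   # no self-connections
    return W

def settle(W, state, steps=20):
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)   # synchronous sign update
    return state

rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(2, 64))      # two stored ±1 items
W = hebbian_store(patterns)
cue = patterns[0].copy()
flip = rng.choice(64, size=8, replace=False)      # corrupt 8 of 64 bits
cue[flip] *= -1
out = settle(W, cue)                              # falls into pattern 0's basin
```

The thesis's point is that when many such items are packed into robust cyclic attractors, chaotic background itinerancy among them emerges as a consequence of the learning, something this fixed-point toy cannot exhibit but which the same basin picture helps to frame.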