About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD).
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A new neurocomputing approach: Software and hardware designs

Akbari, Kazem January 1995
No description available.
2

Neurocomputing and Associative Memories Based on Emerging Technologies: Co-optimization of Technology and Architecture

Calayir, Vehbi 01 September 2014
Neurocomputers offer a massively parallel computing paradigm by mimicking the human brain. Their efficient use in statistical information processing has been proposed to overcome critical bottlenecks of traditional computing schemes in applications such as image and speech processing and associative memory. In neural networks, information is generally represented by phase (e.g., oscillatory neural networks) or amplitude (e.g., cellular neural networks). Phase-based neurocomputing is constructed as a network of coupled oscillatory neurons connected via programmable phase elements. Representing each neuron with a single oscillatory device and implementing programmable phases among neighboring neurons, however, are not clearly feasible, if not impossible, from a circuits perspective. In contrast to nascent oscillatory neurocomputing circuits, mature amplitude-based neural networks offer more efficient circuit solutions using simpler resistive networks in which information is carried via voltage- and current-mode signals. Yet such circuits have not been efficiently realized in CMOS alone, owing to the need for an efficient summing mechanism for weighted neural signals and for a digitally controlled weighting element to represent couplings among artificial neurons. The large power consumption and high circuit complexity of such CMOS-based implementations have precluded the adoption of amplitude-based neurocomputing circuits as well, and have led researchers to explore emerging technologies for such circuits. Although they provide intriguing properties, previously proposed neurocomputing components based on emerging technologies have not offered a complete and practical solution for efficiently constructing an entire system. In this thesis we explore the generalized problem of co-optimizing technology and architecture for such systems, and we develop a recipe for device requirements and target capabilities. We describe four plausible technologies, each of which could potentially enable an efficient and fully functional neurocomputing system. We first investigate fully digital neural network architectures, previously attempted in CMOS, which require many large logic elements such as D flip-flops and look-up tables. Using mLogic, a newly proposed all-magnetic non-volatile logic family, we demonstrate the efficacy of digitizing the oscillators and phase relationships of an oscillatory neural network by exploiting the technology's inherent storage, and we enable an all-digital cellular neural network hardware with simplified programmability. We perform system-level comparisons of mLogic and 32nm CMOS for both networks, each consisting of 60 neurons. Although digital implementations based on mLogic offer improvements over CMOS in power and area, analog neurocomputing architectures are more compatible with the greater portion of emerging technologies and devices. To this end, we explore several emerging technologies with unique device configurations and features, such as mCell devices, ovenized aluminum nitride resonators, and tunable multi-gate graphene devices, to efficiently enable the two key components required for such analog networks: a summing function and a weighting element with compact D/A (digital-to-analog) conversion capability. We demonstrate novel ways to implement these functions and elaborate on our building blocks for artificial neurons and synapses in each technology.
We verify the functionality of each proposed implementation on various image processing applications, using compact circuit simulation models for these post-CMOS devices. Finally, we design a proof-of-concept neurocomputing circuit containing 20 neurons in 65nm CMOS, based on the primitives we define for our analog neurocomputing scheme. This allows us to fully expose the inefficiencies of an all-CMOS implementation for such applications. We present experimental results, in agreement with circuit simulations, for the same image processing applications based on the proposed architectures using emerging technologies. Power and area comparisons demonstrate significant improvements for analog neurocomputing circuits implemented with beyond-CMOS technologies, promising substantial opportunities for future energy-efficient computing.
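As an illustration of the amplitude-based scheme this abstract describes, the sketch below (my own construction, not from the thesis; all names and values are assumptions) models a resistive crossbar that sums weighted voltage-mode signals as currents, followed by a cellular-neural-network-style saturating activation:

```python
import numpy as np

def crossbar_step(voltages, conductances, v_sat=1.0):
    """One synchronous update of an amplitude-based neural network.

    voltages     -- (n,) neuron output voltages
    conductances -- (n, n) programmable coupling weights
    v_sat        -- saturation level of the piecewise-linear activation
    """
    # Kirchhoff current summation: each row of the conductance matrix
    # weights and sums the incoming signals, I_j = sum_i G[j, i] * V[i].
    currents = conductances @ voltages
    # CNN-style piecewise-linear saturating activation.
    return np.clip(currents, -v_sat, v_sat)

# Toy usage: 4 neurons with random couplings, iterated toward a fixed point.
rng = np.random.default_rng(0)
G = 0.3 * rng.standard_normal((4, 4))
v = rng.standard_normal(4)
for _ in range(50):
    v = crossbar_step(v, G)
print(v)
```

Here the conductance matrix G stands in for the digitally programmed weighting elements, and Kirchhoff current summation provides the summing function that the abstract identifies as the two key components of an analog network.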
3

Inductive machine learning bias in knowledge-based neurocomputing

Snyders, Sean 2003
Thesis (MSc)--Stellenbosch University, 2003. / ABSTRACT: The integration of symbolic knowledge with artificial neural networks is becoming an increasingly popular paradigm for solving real-world problems. This paradigm, named knowledge-based neurocomputing, provides the means to use prior knowledge to determine the network architecture, to program a subset of weights to induce a learning bias that guides network training, and to extract refined knowledge from trained neural networks. The role of the neural network then becomes that of knowledge refinement, providing a methodology for dealing with uncertainty in the initial domain theory. In this thesis, we discuss several advantages of this paradigm and propose a solution to the open question of determining the strength of this learning, or inductive, bias. We develop a heuristic for determining the strength of the inductive bias that takes the network architecture, the prior knowledge, the learning method, and the training data into consideration. We apply this heuristic to well-known synthetic problems as well as to difficult published real-world problems in molecular biology and medical diagnosis. We find that networks trained with this adaptive inductive bias not only outperform networks trained with the standard method of setting the strength of the inductive bias, but also that the refined knowledge extracted from these networks yields more concise and accurate domain theories.
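To make the "programmed subset of weights" idea concrete, here is a minimal KBANN-style sketch (my own construction; the rule format, names, and the bias-strength parameter omega are assumptions, not the thesis's notation) that maps propositional rules onto initial network weights, with omega controlling the strength of the inductive bias that training subsequently refines:

```python
import numpy as np

def rules_to_weights(rules, features, omega=4.0):
    """Map propositional rules onto initial weights and biases.

    rules    -- list of (positive_antecedents, negated_antecedents) pairs,
                one rule per hidden unit
    features -- ordered list of input feature names
    omega    -- inductive-bias strength: how strongly the prior knowledge
                is asserted before training refines it
    """
    W = np.zeros((len(rules), len(features)))
    b = np.zeros(len(rules))
    for j, (pos, neg) in enumerate(rules):
        for name in pos:
            W[j, features.index(name)] = omega     # required antecedent
        for name in neg:
            W[j, features.index(name)] = -omega    # forbidden antecedent
        # Threshold so the unit fires only when all positive antecedents
        # are active and no negated antecedent is.
        b[j] = -omega * (len(pos) - 0.5)
    return W, b

# Example: one rule "promoter <- contact, conformation" over three features.
features = ["contact", "conformation", "other"]
W, b = rules_to_weights([(["contact", "conformation"], [])], features)
print(W, b)
```

Gradient-based training would then adjust W and b from data; the heuristic described in the abstract would correspond, roughly, to choosing omega adaptively from the architecture, prior knowledge, learning method, and training data rather than fixing it by hand.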
4

Computação por assembleias neurais em redes neurais pulsadas. / Computing with neural assemblies in spiking neural networks.

Ribeiro, João Henrique Ranhel 05 December 2011
One of the greatest mysteries in science is understanding how nervous systems are capable of the extraordinary computational operations they perform. Brains are probably the structures in which matter and energy are organized in the most complex way in the universe. Central to brain computation is the concept of the neuron, and how neurons compute is the subject of intense scientific investigation. A prevailing consensus is that neurons form transient groups (assemblies) in order to represent things, to perform computational operations, and to execute cognitive processes, although the mechanisms that underlie such computation by neural assemblies are not yet well understood. This thesis proposes an account of how neural assembly computing may occur. Two components are shown to be fundamental for the formation of neural coalitions: the temporal relation among neural groups and the coupling factor among them. Assemblies presuppose spiking neurons; assembly computing is therefore simulated here with spiking neural networks. The approach taken is functional: the thesis presents a theoretical framework covering the properties, principles, and dynamics that enable computational operations in neural coalitions.
The thesis shows that: (i) when neurons form assemblies, a kind of stochastic logic function is implicitly computed; (ii) assemblies may form groups that feed back on one another, creating bistable groups; (iii) bistable groups internally represent the events that created them; and (iv) assemblies may branch into, and dissolve, other assemblies, giving rise to complex algorithms. This is an initial investigation of neural assembly computing, and much remains to be done; the thesis presents the foundational concepts for this new approach. A set of programs in the appendices allows the reader to simulate assembly formation, branching, inhibition, and reverberation, among other properties and components of the proposal.
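The bistable, reverberating assemblies of items (ii) and (iii) can be illustrated with a small spiking-network sketch (a minimal construction of my own, not the thesis's code; all parameter values are assumptions): two mutually excitatory groups of leaky integrate-and-fire neurons in which a brief coincident input ignites a coalition that keeps firing after the stimulus ends.

```python
import numpy as np

def simulate(steps=200, n=10, tau=0.9, w_fb=0.16, thresh=1.0):
    """Two mutually excitatory groups of leaky integrate-and-fire neurons."""
    rng = np.random.default_rng(1)
    v = np.zeros((2, n))                  # membrane potentials, groups A and B
    spikes = np.zeros((2, n), dtype=bool)
    history = []
    for t in range(steps):
        # External drive only during a short ignition window.
        ext = 0.4 if 20 <= t < 30 else 0.0
        # Each group is driven by the other group's spikes; w_fb plays
        # the role of the coupling factor binding the coalition.
        fb = w_fb * spikes[::-1].sum(axis=1, keepdims=True)
        v = tau * v + ext + fb + 0.02 * rng.standard_normal((2, n))
        spikes = v >= thresh
        v = np.where(spikes, 0.0, v)      # reset neurons that fired
        history.append(int(spikes.sum()))
    return history

rates = simulate()
print("spikes before ignition:", sum(rates[:20]))
print("spikes after stimulus ends:", sum(rates[40:]))
```

The feedback weight w_fb acts as the coupling factor: below a critical value the coalition dies out when the input stops, while above it the groups keep reverberating and thus internally represent the event that ignited them.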
