91

Design and Optimization of Temporal Encoders using Integrate-and-Fire and Leaky Integrate-and-Fire Neurons

Anderson, Juliet Graciela 05 October 2022 (has links)
As Moore's law nears its limit, a new form of signal processing is needed. Neuromorphic computing has drawn inspiration from biology to produce a new form of signal processing by mimicking biological neural networks using electrical components. Neuromorphic computing requires less signal preprocessing than digital systems since it can encode signals directly using analog temporal encoders from Spiking Neural Networks (SNNs). These encoders receive an analog signal as input and generate a spike or spike train as output. The proposed temporal encoders use latency and Inter-Spike Interval (ISI) encoding and are expected to produce a highly sensitive hardware implementation of time encoding to preprocess signals for dynamic neural processors. Two ISI and two latency encoders were designed using Integrate-and-Fire (IF) and Leaky Integrate-and-Fire (LIF) neurons and optimized for low area. The IF and LIF neurons were designed in the GlobalFoundries 180 nm CMOS process and achieved areas of 186 µm² and 182 µm², respectively. All four encoders have a sampling frequency of 50 kHz. The latency encoders achieved an average energy consumption per spike of 277 nJ and 316 pJ for the IF-based and LIF-based latency encoders, respectively. The ISI encoders achieved an average energy consumption per spike of 1.07 µJ and 901 nJ for the IF-based and LIF-based ISI encoders, respectively. Power consumption is proportional to the number of neurons employed in the encoder, and the potential to reduce power consumption through layout-level simulations is presented. The LIF neuron can use a smaller membrane capacitance to achieve similar operability to the IF neuron and consumes less area despite having more components. This demonstrates that capacitor size is the main limitation on the area of spiking neurons for SNNs. An overview of the design and layout process of the two presented neurons is discussed, with tips for overcoming problems encountered. 
The proposed designs can enable fast neuromorphic processing by employing a frequency higher than 10 kHz and by providing a hardware implementation that is efficient in multiple sectors like machine learning, medical implementations, or security systems, since hardware is less vulnerable to hacking. / Master of Science / As Moore's law nears its limit, a new form of signal processing is needed. Moore's law anticipated that transistor sizes would decrease exponentially over the years, but CMOS technology is reaching physical limitations that could mean an end to Moore's prediction. Neuromorphic computing has drawn inspiration from biology to produce a new form of signal processing by mimicking biological neural networks using electrical components. Biological neural networks communicate through interconnected neurons that transmit signals through synapses. Neuromorphic computing uses a subdivision of Artificial Neural Networks (ANNs) called Spiking Neural Networks (SNNs) to encode input signals into voltage spikes that mimic biological neurons. Neuromorphic computing reduces the preprocessing needed to process data in the digital domain since it can encode signals directly using analog temporal encoders from SNNs. These encoders receive an analog signal as input and generate a spike or spike train as output. The proposed temporal encoders use latency and Inter-Spike Interval (ISI) encoding and are expected to produce a highly sensitive hardware implementation of time encoding to preprocess signals for dynamic neural processors. Two ISI and two latency encoders were designed using Integrate-and-Fire (IF) and Leaky Integrate-and-Fire (LIF) neurons and optimized for low area. All four encoders have a sampling frequency of 50 kHz. The latency encoders achieved an average energy consumption per spike of 277 nJ and 316 pJ for the IF-based and LIF-based latency encoders, respectively. 
The ISI encoders achieved an average energy consumption per spike of 1.07 µJ and 901 nJ for the IF-based and LIF-based ISI encoders, respectively. Power consumption is proportional to the number of neurons employed in the encoder, and the potential to reduce power consumption through layout-level simulations is presented. The LIF neuron can use a smaller membrane capacitance to achieve similar operability and consumes less area despite having more components than the IF neuron. This demonstrates that capacitor size is the main limitation on the area of spiking neurons for SNNs. An overview of the design and layout process of the two presented neurons is discussed, with tips for overcoming problems encountered. The proposed designs can enable fast neuromorphic processing by employing a frequency higher than 10 kHz and by providing a hardware implementation that is efficient in multiple sectors like machine learning, medical implementations, or security systems, since hardware is less vulnerable to hacking.
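The integrate-and-fire dynamics and latency encoding described in this abstract can be sketched behaviourally in software. The following is a minimal model; the threshold, time constant, and time step are illustrative assumptions, not the circuit values from the thesis:

```python
import numpy as np

def lif_encode_latency(x, v_th=1.0, tau=2e-3, dt=2e-5, t_max=2e-2):
    """Charge a leaky integrate-and-fire membrane with a constant input x
    and return the first-spike latency, or None if no spike occurs.
    All parameters are illustrative, not the thesis's circuit values."""
    v = 0.0
    t = 0.0
    while t < t_max:
        # Leaky integration (forward Euler): dv/dt = (-v + x) / tau
        v += dt * (-v + x) / tau
        if v >= v_th:
            return t  # first-spike latency encodes the input amplitude
        t += dt
    return None
```

Stronger inputs charge the membrane faster, so they produce shorter latencies; inputs whose steady-state membrane voltage stays below threshold never spike, which is the basic sensitivity trade-off a latency encoder exposes.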
92

Energy-efficient Neuromorphic Computing for Resource-constrained Internet of Things Devices

Liu, Shiya 03 November 2023 (has links)
Due to the limited computation and storage resources of Internet of Things (IoT) devices, many emerging intelligent applications based on deep learning techniques heavily depend on cloud computing for computation and storage. However, cloud computing faces technical issues with long latency, poor reliability, and weak privacy, resulting in the need for on-device computation and storage. On-device computation is also essential for many time-critical applications, which require real-time data processing and energy efficiency. Furthermore, the escalating requirements for on-device processing are driven by network bandwidth limitations and consumer expectations concerning data privacy and user experience. In the realm of computing, there is a growing interest in exploring novel technologies that can facilitate ongoing advancements in performance. Of the various prospective avenues, the field of neuromorphic computing has garnered significant recognition as a crucial means to achieve fast and energy-efficient machine intelligence applications for IoT devices. The programming of neuromorphic computing hardware typically involves the construction of a spiking neural network (SNN) capable of being deployed onto the designated neuromorphic hardware. This dissertation presents a range of methodologies aimed at enhancing the precision and energy efficiency of SNNs. More precisely, these advancements are achieved by incorporating four essential methods. The first method is the quantization of neural networks through knowledge distillation. This work introduces a quantization technique that effectively reduces the computational and storage resource requirements of a model while minimizing the loss of accuracy. 
To further enhance the reduction of quantization errors, the second method introduces a novel quantization-aware training algorithm specifically designed for training quantized spiking neural network (SNN) models intended for execution on the Loihi chip, a specialized neuromorphic computing chip. SNNs generally exhibit lower accuracy performance compared to deep neural networks (DNNs). The third approach introduces a DNN-SNN co-learning algorithm, which enhances the performance of SNN models by leveraging knowledge obtained from DNN models. The design of the neural architecture plays a vital role in enhancing the accuracy and energy efficiency of an SNN model. The fourth method presents a novel neural architecture search algorithm specifically tailored for SNNs on the Loihi chip. The method selects an optimal architecture based on gradients induced by the architecture at initialization across different data samples without the need for training the architecture. To demonstrate the effectiveness and performance across diverse machine intelligence applications, our methods are evaluated through (i) image classification, (ii) spectrum sensing, and (iii) modulation symbol detection. / Doctor of Philosophy / In the emerging Internet of Things (IoT), our everyday devices, from smart home gadgets to wearables, can autonomously make intelligent decisions. However, due to their limited computing power and storage, many IoT devices heavily depend on cloud computing, which brings along issues like slow response times, privacy concerns, and unreliable connections. Neuromorphic computing is a recognized and crucial approach for achieving fast and energy-efficient machine intelligence applications in IoT devices. Inspired by the human brain's neural networks, this cutting-edge approach allows devices to perform complex tasks efficiently and in real-time. The programming of this neuromorphic hardware involves creating spiking neural networks (SNNs). 
This dissertation presents several innovative methods to improve the precision and energy efficiency of these SNNs. Firstly, a technique called "quantization" reduces the computational and storage requirements of models without sacrificing accuracy. Secondly, a unique training algorithm is designed to enhance the performance of SNN models. Thirdly, a clever co-learning algorithm allows SNN models to learn from traditional deep neural networks (DNNs), further improving their accuracy. Lastly, a novel neural architecture search algorithm finds the best architecture for SNNs on the designated neuromorphic chip, without the need for extensive training. By making IoT devices smarter and more efficient, neuromorphic computing brings us closer to a world where our gadgets can perform intelligent tasks independently, enhancing convenience and privacy for users across the globe.
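The quantization idea behind the first method can be illustrated with a generic uniform weight quantizer. This is a plain post-training sketch, not the dissertation's knowledge-distillation-based scheme; the bit width and per-tensor scaling are illustrative:

```python
import numpy as np

def quantize_weights(w, n_bits=8):
    """Uniform symmetric quantization of a weight tensor to n_bits.
    A generic sketch of the idea, not the dissertation's method."""
    q_max = 2 ** (n_bits - 1) - 1           # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / q_max       # one scale per tensor
    q = np.clip(np.round(w / scale), -q_max, q_max).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map integer codes back to approximate float weights."""
    return q.astype(np.float32) * scale
```

Storing 8-bit integer codes plus one float scale in place of 32-bit floats cuts storage roughly fourfold, while the reconstruction error per weight is bounded by half a quantization step.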
93

Spiking neural P systems: matrix representation and formal verification

Gheorghe, Marian, Lefticaru, Raluca, Konur, Savas, Niculescu, I.M., Adorna, H.N. 28 April 2021 (has links)
Structural and behavioural properties of models are very important in the development of complex systems and applications. In this paper, we investigate such properties for some classes of SN P systems. First, a class of SN P systems associated with a set of routing problems is investigated through their matrix representation. This makes it possible to draw connections amongst some of these problems. Second, the behavioural properties of these SN P systems are formally verified through a natural and direct mapping of these models into kP systems, which are equipped with adequate formal verification methods and tools. Some examples are used to demonstrate the effectiveness of the verification approach. / EPSRC research grant EP/R043787/1; DOST-ERDT research grants; Semirara Mining Corp; UPD-OVCRD;
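The matrix representation mentioned above can be sketched on a toy two-neuron system (this example is invented for illustration and is not one of the paper's routing systems): a configuration vector of per-neuron spike counts evolves by adding the product of a spiking vector and a spiking transition matrix.

```python
import numpy as np

# Illustrative two-neuron SN P system. Rows of M are rules, columns are
# neurons: a negative entry is the number of spikes the rule consumes in
# its own neuron; a positive entry is the number of spikes it delivers
# to a connected neuron.
M = np.array([[-2, +1],    # rule r1 in neuron 1: consume 2 spikes, send 1 to neuron 2
              [+1, -1]])   # rule r2 in neuron 2: consume 1 spike, send 1 to neuron 1

c0 = np.array([2, 1])      # initial configuration: spike counts per neuron
s = np.array([1, 1])       # spiking vector: both rules fire in this step

c1 = c0 + s @ M            # next configuration
```

Reducing one computation step to a vector-matrix update is what lets structural connections between systems be studied through linear algebra rather than by simulating the rules directly.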
94

Leveraging Biological Mechanisms in Machine Learning

Rogers, Kyle J. 10 June 2024 (has links) (PDF)
This thesis integrates biologically-inspired mechanisms into machine learning to develop novel tuning algorithms, gradient abstractions for depth-wise parallelism, and an original bias neuron design. We introduce neuromodulatory tuning, which uses neurotransmitter-inspired bias adjustments to enhance transfer learning in spiking and non-spiking neural networks, significantly reducing parameter usage while maintaining performance. Additionally, we propose a novel approach that decouples the backward pass of backpropagation using layer abstractions, inspired by feedback loops in biological systems, enabling depth-wise training parallelization. We further extend neuromodulatory tuning by designing spiking bias neurons that mimic dopamine neuron mechanisms, leading to the development of volumetric tuning. This method enhances the fine-tuning of a small spiking neural network for EEG emotion classification, outperforming previous bias tuning methods. Overall, this thesis demonstrates the potential of leveraging neuroscience discoveries to improve machine learning.
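Neuromodulatory tuning, as described above, adjusts only bias terms while leaving the pretrained weights frozen. A rough single-unit sketch of bias-only fine-tuning (the logistic unit, loss, learning rate, and data here are illustrative assumptions, not the thesis's setup):

```python
import numpy as np

def tune_bias_only(w, b, x, y, lr=0.5, steps=200):
    """Fine-tune only the bias of a frozen logistic unit: a rough sketch
    of bias-only 'neuromodulatory' tuning. Setup is illustrative."""
    for _ in range(steps):
        z = x @ w + b
        p = 1.0 / (1.0 + np.exp(-z))   # sigmoid output
        grad_b = np.mean(p - y)        # dL/db for binary cross-entropy
        b -= lr * grad_b               # the weights w are never updated
    return b
```

Because only one scalar per neuron is trained, the number of tuned parameters drops from O(weights) to O(neurons), which is the parameter-efficiency argument the thesis builds on.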
95

Neocortical Interneuron Subtypes Show an Altered Distribution in a Rat Model of Maldevelopment Associated With Epileptiform Activity

Hays, Kimberly Lynne 01 January 2007 (has links)
Cortical malformations resulting from altered development are a common cause of human epilepsy. The cellular mechanisms that render neurons of malformed cortex epileptogenic remain unclear. Using a rat model of the malformation microgyria, a previous study showed an alteration in the number of immunocytochemically identified parvalbumin cells, a GABAergic inhibitory interneuron subtype (Rosen et al., 1998). A second study showed no change in the total number of GABAergic neurons (Schwarz et al., 2000). Consequently, we hypothesize that interneuron subtypes are differentially affected by maldevelopment. The present study investigated (1) whether interneuron subtype identity is retained in malformed cortex, based on chemical content, and (2) whether the proportion of three chemical subtypes is altered in malformed cortex. Here we demonstrate that three non-overlapping subtype markers remain non-overlapping in malformed cortex, but show altered distributions. These findings suggest that an increase in one subpopulation of interneurons may compensate for a corresponding decrease in a second subset.
96

AN ORGANIC NEURAL CIRCUIT: TOWARDS FLEXIBLE AND BIOCOMPATIBLE ORGANIC NEUROMORPHIC PROCESSING

Mohammad Javad Mirshojaeian Hosseini (16700631) 31 July 2023 (has links)
Neuromorphic computing endeavors to develop computational systems capable of emulating the brain's capacity to execute intricate tasks concurrently and with remarkable energy efficiency. By utilizing new bioinspired computing architectures, these systems have the potential to revolutionize high-performance computing and enable local, low-energy computing for sensors and robots. Organic and soft materials are particularly attractive for neuromorphic computing as they offer biocompatibility, low-energy switching, and excellent tunability at a relatively low cost. Additionally, organic materials provide physical flexibility, large-area fabrication, and printability.
This doctoral dissertation showcases the research conducted in fabricating a comprehensive spiking organic neuron, which serves as the fundamental constituent of a circuit system for neuromorphic computing. The major contribution of this dissertation is the development of an organic, flexible neuron composed of spiking synapses and somas utilizing ultra-low-voltage organic field-effect transistors (OFETs) for information processing. The synaptic and somatic circuits are implemented using the physically flexible and biocompatible organic electronics necessary to realize the Polymer Neuromorphic Circuitry. An Axon-Hillock (AH) somatic circuit was fabricated and analyzed, followed by the adaptation of a log-domain integrator (LDI) synaptic circuit and the fabrication and analysis of a differential-pair integrator (DPI). Finally, a spiking organic neuron was formed by combining two LDI synaptic circuits and one AH somatic circuit, and its characteristics were thoroughly examined. This is the first demonstration of the fabrication of an entire neuron using solid-state organic materials over a flexible substrate with integrated complementary OFETs and capacitors.
97

Exploring Column Update Elimination Optimization for Spike-Timing-Dependent Plasticity Learning Rule / Utforskar kolumnuppdaterings-elimineringsoptimering för spik-timing-beroende plasticitetsinlärningsregel

Singh, Ojasvi January 2022 (has links)
Hebbian-learning-based neural network learning rules, when implemented in hardware, store their synaptic weights in the form of a two-dimensional matrix. The storage of synaptic weights demands large memory bandwidth and capacity. While memory units are optimized only for row-wise access, Hebbian learning rules, like spike-timing-dependent plasticity, demand both row- and column-wise access of memory. This dual pattern of memory access accounts for the dominant cost in terms of latency as well as energy for the realization of large-scale spiking neural networks in hardware. In order to reduce the memory access cost of Hebbian learning rules, a Column Update Elimination optimization has previously been implemented, with great efficacy, on the Bayesian Confidence Propagation neural network, which faces a similar challenge of dual-pattern memory access. This thesis explores the possibility of extending the Column Update Elimination optimization to spike-timing-dependent plasticity, by simulating the learning rule on a two-layer network of leaky integrate-and-fire neurons on an image classification task. The spike times are recorded for each neuron in the network to derive a suitable probability distribution function for spike rates per neuron. This is then used to derive an ideal postsynaptic spike history buffer size for the given algorithm. The associated memory access reductions are analysed based on this data to assess the feasibility of applying the optimization to the learning rule.
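The pair-based STDP rule whose memory-access pattern is at issue can be sketched as a weight change driven by the spike-time difference dt = t_post − t_pre; pre-before-post potentiates, post-before-pre depresses. Amplitudes and the time constant below are illustrative defaults, not the thesis's values:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20e-3):
    """Pair-based STDP weight change for dt = t_post - t_pre (seconds).
    Parameter values are illustrative defaults, not the thesis's."""
    if dt > 0:    # pre fired before post: potentiate
        return a_plus * np.exp(-dt / tau)
    if dt < 0:    # post fired before pre: depress
        return -a_minus * np.exp(dt / tau)
    return 0.0
```

In a weight matrix, post-spike-triggered updates touch a row while pre-spike-triggered updates touch a column; buffering postsynaptic spike history so column accesses can be deferred or dropped is the essence of the Column Update Elimination idea explored here.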
98

Reconhecimento de padrões usando uma rede neural pulsada inspirada no bulbo olfatório / Pattern Recognition Using a Spiking Neural Network Inspired by the Olfactory Bulb

Figueira, Lucas Baggio 31 August 2011 (has links)
The olfactory system is remarkable for its capacity to discriminate very similar odors, even when they are mixed. This discrimination capability is due in part to spatio-temporal activity patterns generated in mitral cells, the principal cells of the olfactory bulb, during odor presentation. These dynamic patterns arise from reciprocal synaptic interactions between mitral cells and inhibitory interneurons of the olfactory bulb, for example, the granule cells. This thesis presents a model of the olfactory bulb based on spiking models of mitral and granule cells and evaluates its performance as a pattern recognition system using datasets of both artificial and real patterns. The results show that the dynamic activity patterns produced in the model's mitral cells give it the capability to separate patterns into different classes, a capability that can be explored in the construction of pattern recognition systems. We also present Nemos, a tool developed to implement the model: a platform for simulating spiking neurons and neural networks with a user-friendly graphical interface and extensible models for neurons, synapses, and networks.
99

Deep spiking neural networks

Liu, Qian January 2018 (has links)
Neuromorphic Engineering (NE) has led to the development of biologically-inspired computer architectures whose long-term goal is to approach the performance of the human brain in terms of energy efficiency and cognitive capabilities. Although there are a number of neuromorphic platforms available for large-scale Spiking Neural Network (SNN) simulations, the problem of programming these brain-like machines to be competent in cognitive applications still remains unsolved. On the other hand, Deep Learning has emerged in Artificial Neural Network (ANN) research to dominate state-of-the-art solutions for cognitive tasks. Thus the main research problem emerges of understanding how to operate and train biologically-plausible SNNs to close the gap in cognitive capabilities between SNNs and ANNs. SNNs can be trained by first training an equivalent ANN and then transferring the tuned weights to the SNN. This method is called ‘off-line’ training, since it does not take place on an SNN directly, but rather on an ANN instead. However, previous work on such off-line training methods has struggled in terms of poor modelling accuracy of the spiking neurons and high computational complexity. In this thesis we propose a simple and novel activation function, Noisy Softplus (NSP), to closely model the response firing activity of biologically-plausible spiking neurons, and introduce a generalised off-line training method using the Parametric Activation Function (PAF) to map the abstract numerical values of the ANN to concrete physical units, such as current and firing rate in the SNN. Based on this generalised training method and its fine tuning, we achieve the state-of-the-art accuracy on the MNIST classification task using spiking neurons, 99.07%, on a deep spiking convolutional neural network (ConvNet). We then take a step forward to ‘on-line’ training methods, where Deep Learning modules are trained purely on SNNs in an event-driven manner. 
Existing work has failed to provide SNNs with recognition accuracy equivalent to ANNs due to the lack of mathematical analysis. Thus we propose a formalised Spike-based Rate Multiplication (SRM) method which transforms the product of firing rates to the number of coincident spikes of a pair of rate-coded spike trains. Moreover, these coincident spikes can be captured by the Spike-Time-Dependent Plasticity (STDP) rule to update the weights between the neurons in an on-line, event-based, and biologically-plausible manner. Furthermore, we put forward solutions to reduce correlations between spike trains, thereby addressing the performance drop this causes in on-line SNN training. The promising results of spiking Autoencoders (AEs) and spiking Restricted Boltzmann Machines (SRBMs) exhibit equivalent, sometimes even superior, classification and reconstruction capabilities compared to their non-spiking counterparts. To provide meaningful comparisons between these proposed SNN models and other existing methods within this rapidly advancing field of NE, we propose a large dataset of spike-based visual stimuli and a corresponding evaluation methodology to estimate the overall performance of SNN models and their hardware implementations.
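The Noisy Softplus activation proposed in this thesis can be sketched as a softplus whose sharpness scales with the neuron's noise level. The parameterisation below, y = kσ·ln(1 + exp(x/(kσ))), is a commonly cited form of the function; the exact fit used in the thesis may differ, and the value of k is illustrative:

```python
import numpy as np

def noisy_softplus(x, sigma=1.0, k=0.17):
    """Noisy-Softplus-style activation: a softplus whose sharpness scales
    with the noise level sigma. k is a fitted curve parameter; the value
    here is illustrative."""
    s = k * sigma
    return s * np.log1p(np.exp(x / s))
```

For large positive inputs the function approaches the identity (ReLU-like) and for large negative inputs it approaches zero, while the noise level sets how soft the transition is: this smooth, parameterised response is what lets ANN activations be mapped onto the noisy firing-rate response of spiking neurons during off-line training.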
100

Reservoir-computing-based, biologically inspired artificial neural networks and their applications in power systems

Dai, Jing 05 April 2013 (has links)
Computational intelligence techniques, such as artificial neural networks (ANNs), have been widely used to improve the performance of power system monitoring and control. Although inspired by the neurons in the brain, ANNs are largely different from living neuron networks (LNNs) in many aspects. Due to the oversimplification, the huge computational potential of LNNs cannot be realized by ANNs. Therefore, a more brain-like artificial neural network is highly desired to bridge the gap between ANNs and LNNs. The focus of this research is to develop a biologically inspired artificial neural network (BIANN), which is not only biologically meaningful, but also computationally powerful. The BIANN can serve as a novel computational intelligence tool in monitoring, modeling and control of the power systems. A comprehensive survey of ANNs applications in power system is presented. It is shown that novel types of reservoir-computing-based ANNs, such as echo state networks (ESNs) and liquid state machines (LSMs), have stronger modeling capability than conventional ANNs. The feasibility of using ESNs as modeling and control tools is further investigated in two specific power system applications, namely, power system nonlinear load modeling for true load harmonic prediction and the closed-loop control of active filters for power quality assessment and enhancement. It is shown that in both applications, ESNs are capable of providing satisfactory performances with low computational requirements. A novel, more brain-like artificial neural network, i.e. biologically inspired artificial neural network (BIANN), is proposed in this dissertation to bridge the gap between ANNs and LNNs and provide a novel tool for monitoring and control in power systems. A comprehensive survey of the spiking models of living neurons as well as the coding approaches is presented to review the state-of-the-art in BIANN research. 
The proposed BIANNs are based on spiking models of living neurons with the adoption of reservoir-computing approaches. It is shown that the proposed BIANNs have strong modeling capability and low computational requirements, which makes them a strong candidate for online monitoring and control applications in power systems. BIANN-based modeling and control techniques are also proposed for power system applications. The proposed modeling and control schemes are validated for the modeling and control of a generator in a single-machine infinite-bus system under various operating conditions and disturbances. It is shown that the proposed BIANN-based technique can provide better control of the power system to enhance its reliability and tolerance to disturbances. To sum up, this dissertation proposes a novel, more brain-like artificial neural network, the BIANN, to bridge the gap between ANNs and LNNs and to provide a novel tool for monitoring and control in power systems. It is shown that the proposed BIANN-based modeling and control schemes can provide faster and more accurate control for power system applications. The conclusions, recommendations for future research, and the major contributions of this research are presented at the end.
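An echo state network of the kind surveyed in this dissertation can be sketched in a few lines: a fixed random reservoir whose spectral radius is scaled below one (a standard heuristic for the echo state property), with a ridge-regression readout as the only trained component. Reservoir size, leak rate, input scaling, and the demo task below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n=100, rho=0.9):
    """Random recurrent matrix rescaled to spectral radius rho."""
    w = rng.standard_normal((n, n))
    return w * (rho / np.max(np.abs(np.linalg.eigvals(w))))

def run_esn(w, w_in, u, leak=0.3):
    """Drive the reservoir with input sequence u and collect the states."""
    x = np.zeros(w.shape[0])
    states = []
    for u_t in u:
        # leaky-integrator state update; only the readout is ever trained
        x = (1 - leak) * x + leak * np.tanh(w @ x + w_in * u_t)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, reg=1e-6):
    """Ridge regression for the linear readout weights."""
    a = states.T @ states + reg * np.eye(states.shape[1])
    return np.linalg.solve(a, states.T @ targets)
```

Training only the linear readout is what gives reservoir-computing ANNs their low computational requirements relative to fully trained recurrent networks, the property the dissertation exploits for online power system applications.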
