121 |
Transferência de frequência em modelos de neurônios de disparo / Frequency transfer of spiking neuron models
Gewers, Felipe Lucas, 25 February 2019
This work addresses the frequency transfer of spiking neurons, specifically leaky integrate-and-fire neurons and Izhikevich neurons. Through analytical calculations and systematic numerical simulations, the gain function and the stationary and dynamic frequency transfer of the adopted neuron models are obtained for several values of the model parameters. Multiple fits are then made to the resulting curves, and the estimated coefficients are presented. Based on these data, several characteristics of the frequency-transfer relations are derived, together with how their properties vary with respect to the main parameters of the adopted neuron and synapse models. Several interesting results are presented, including: evidence that the integrate-and-fire neuron's gain function can behave quite similarly to the Izhikevich neuron's gain function and stationary transfer, depending on the adopted parameters; a division of the integrate-and-fire model's parameter plane according to the linearity of the dynamic frequency transfer; the finding that an Izhikevich neuron's thresholds in direct-current intensity and in presynaptic spike frequency are determined solely by the parameter b, within the usual parameter range; and the observation that distinct synapse models tend not to alter the shape of an Izhikevich neuron's stationary frequency transfer.
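The gain function studied in this thesis can be illustrated with a minimal numerical sketch: simulate a leaky integrate-and-fire neuron at several constant input currents and count spikes. This is generic illustrative code, not the author's; all parameter values (time constant, thresholds, current range) are arbitrary assumptions for the example.

```python
import numpy as np

def lif_firing_rate(i_ext, tau=0.02, v_rest=0.0, v_thresh=1.0,
                    v_reset=0.0, dt=1e-4, t_sim=2.0):
    """Mean firing rate (Hz) of a leaky integrate-and-fire neuron
    driven by a constant input current i_ext."""
    v, spikes = v_rest, 0
    for _ in range(int(t_sim / dt)):
        # Forward-Euler step of tau * dv/dt = -(v - v_rest) + i_ext
        v += dt * (-(v - v_rest) + i_ext) / tau
        if v >= v_thresh:          # threshold crossing: spike and reset
            spikes += 1
            v = v_reset
    return spikes / t_sim

# Gain function (f-I curve): firing rate versus input current.
currents = np.linspace(0.0, 3.0, 7)
rates = [lif_firing_rate(i) for i in currents]
```

Sweeping a model parameter such as `tau` and refitting the resulting curves is the kind of systematic exploration the abstract describes.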
|
122 |
Controle de posição com múltiplos sensores em um robô colaborativo utilizando liquid state machines / Position control with multiple sensors on a collaborative robot using liquid state machines
Sala, Davi Alberto, January 2017
The idea of employing biologically inspired neural networks to perform computation has been widely explored over the last decades. The essential fact in this paradigm is that a neuron can integrate and process information, and this information can be revealed by its spiking activity. By describing the dynamics of a single neuron with a mathematical model, a network can be implemented from a set of such neurons, in which the spiking activity of each neuron contains contributions, or information, from the spiking activity of the network in which it is embedded. This work presents a Z-axis position controller based on spiking neural networks with sensor fusion, suitable to run on a neuromorphic computer. The proposed framework uses the reservoir-computing paradigm, in the form of a Liquid State Machine (LSM), to control the collaborative robot BAXTER. The system was designed to work in parallel with LSMs that perform trajectories along closed two-dimensional shapes. In order to keep a felt pen in contact with the drawing surface, data from force and distance sensors are fed to the controller. The system was trained using data from a Proportional-Integral-Derivative (PID) controller, merging the data from both sensors. The results show that the LSM was able to learn the behavior of the PID controller in different situations.
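The reservoir-computing scheme described here, a fixed recurrent network with only a linear readout trained on PID data, can be sketched with a rate-based echo-state reservoir standing in for the spiking LSM. The input dimensions, stand-in target signal, and all parameters below are illustrative assumptions, not the thesis's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: only the linear readout is trained.
n_res, n_in = 100, 2                        # reservoir size; inputs (force, distance)
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ u)       # state update
        states.append(x.copy())
    return np.array(states)

# Teacher signal: a stand-in for recorded PID controller output.
T = 500
inputs = rng.normal(0, 1, (T, n_in))
target = 0.8 * inputs[:, 0] - 0.3 * inputs[:, 1]

# Train the readout by regularized least squares (ridge regression).
S = run_reservoir(inputs)
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ target)
prediction = S @ W_out
```

Only `W_out` is learned; the recurrent weights stay fixed, which is what makes reservoir training cheap compared with full recurrent-network training.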
|
123 |
Contribution à la conception d'architecture de calcul auto-adaptative intégrant des nanocomposants neuromorphiques et applications potentielles / Adaptive Computing Architectures Based on Nano-fabricated Components
Bichler, Olivier, 14 November 2012
In this thesis, we study the potential applications of emerging memory nano-devices in computing architectures. More precisely, we show that neuro-inspired architectural paradigms could provide the efficiency and adaptability required by complex image/audio processing and classification applications, at a much lower cost in terms of power consumption and silicon area than current Von Neumann-derived architectures, thanks to a synaptic-like use of these memory nano-devices. This work focuses on memristive nano-devices, recently (re-)introduced with the discovery of the memristor in 2008, and their use as synapses in spiking neural networks. This includes most of the emerging memory technologies: Phase-Change Memory (PCM), Conductive-Bridging RAM (CBRAM), Resistive RAM (RRAM), and others. These devices are particularly suitable for implementing unsupervised learning algorithms drawn from neuroscience, such as Spike-Timing-Dependent Plasticity (STDP), requiring very little control circuitry. The integration of memristive devices in crossbar arrays could moreover provide the huge density required by this type of implementation (several thousand synapses per neuron), which remains out of reach of a purely CMOS technology. This is one of the main factors that hindered the rise of CMOS-based neural-network architectures in the 1990s, along with the relative complexity and inefficiency of the back-propagation learning algorithm, despite all the promising aspects of such neuro-inspired architectures, like adaptability and fault tolerance. In this work, we propose synaptic models for memristive devices and simulation methodologies for architectures exploiting them. Novel neuro-inspired architectures are introduced and simulated for natural data processing. They exploit the synaptic characteristics of memristive nano-devices, along with the latest progress in neuroscience. Finally, we propose hardware implementations for several device types. We assess their scalability and power-efficiency potential, as well as their robustness to the variability and faults that are unavoidable at the nanometric scale of these devices. This last point is of prime importance, as it still constitutes the main difficulty for the integration of these emerging technologies in digital memories.
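The pair-based STDP rule named above can be sketched as an exponential learning window: the weight change depends on the time difference between pre- and postsynaptic spikes. The amplitudes and time constants below are generic textbook values, not those used in the thesis.

```python
import math

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012,
            tau_plus=0.020, tau_minus=0.020):
    """Pair-based STDP weight change for delta_t = t_post - t_pre (seconds).

    Pre-before-post (delta_t > 0) potentiates the synapse (LTP);
    post-before-pre (delta_t < 0) depresses it (LTD).
    """
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau_plus)      # LTP branch
    else:
        return -a_minus * math.exp(delta_t / tau_minus)    # LTD branch

# Causal pairing strengthens, anti-causal pairing weakens.
print(stdp_dw(0.005) > 0, stdp_dw(-0.005) < 0)  # → True True
```

In a memristive implementation, this Δw is realized by programming pulses whose overlap encodes the spike-time difference, which is why so little control circuitry is needed.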
|
124 |
Utilisation des nano-composants électroniques dans les architectures de traitement associées aux imageurs / Integration of memory nano-devices in image sensor processing architectures
Roclin, David, 16 December 2014
By using learning mechanisms drawn from recent discoveries in neuroscience, spiking neural networks have demonstrated their ability to efficiently analyze the large amounts of information coming from our environment. Implementing these circuits on conventional processors does not allow their parallelism to be exploited efficiently. Using digital memory to implement the synaptic weights allows neither parallel reading nor parallel programming of the synapses, and is limited by the bandwidth between the memory and the processing unit. Memristive memory technologies could allow this parallelism to be implemented directly at the heart of the memory. In this thesis, we consider the development of an embedded spiking neural network based on emerging memory devices. First, we analyze a spiking network in order to optimize its components: the neuron, the synapse, and the STDP learning mechanism, with a view to digital implementation. Then, we consider implementing the synaptic memory with memristive devices. Finally, we present the development of a neuromorphic chip co-integrating CMOS neurons with CBRAM synapses.
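The in-memory parallelism invoked here follows from Ohm's and Kirchhoff's laws: applying a voltage vector to the rows of a crossbar yields, on each column, a current equal to the dot product of the voltages with that column's conductances, so the whole vector-matrix product happens in one analog step. A numerical sketch with illustrative values:

```python
import numpy as np

# Conductance matrix G (siemens): one memristor per row-column crossing.
G = np.array([[1.0e-6, 5.0e-6],
              [2.0e-6, 1.0e-6],
              [4.0e-6, 3.0e-6]])   # 3 input rows, 2 output columns

v = np.array([0.2, 0.1, 0.3])     # row voltages (volts)

# Column currents: every multiply-accumulate is performed by the array
# itself; the crossbar physically evaluates I = G^T v in a single step.
i_out = G.T @ v
```

Reading all synapses of a column in parallel this way is exactly what a digital weight memory behind a shared bus cannot do.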
|
125 |
Aprendizado não-supervisionado em redes neurais pulsadas de base radial / Unsupervised learning in pulsed neural networks with radial basis functions
Alexandre da Silva Simões, 07 April 2006
Pulsed (spiking) neural networks, which encode information in the timing of spikes, have emerged as a new and promising approach within the connectionist paradigm of cognitive science. One of these new models is the pulsed neural network with radial basis function, a network able to store information in the axonal propagation delays of neurons and for which explicit training algorithms exist. A recently proposed scheme for temporally encoding input data by population coding with Gaussian receptive fields has shown interesting results in the clustering task. The present work proposes a function for the unsupervised learning of this network, aiming to simplify the calibration of some of its key parameters and to improve the convergence of the pulsed neural network in instance-based learning. The performance of this model is evaluated on pattern classification, in particular the classification of pixels in color images in the computer-vision domain.
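The Gaussian receptive-field encoding mentioned above can be sketched as follows: a scalar input is projected onto a population of overlapping Gaussians, and each neuron's activation is converted into a spike time, with stronger activation firing earlier. The field count, time scale, and threshold below are illustrative choices, not the thesis's calibration.

```python
import numpy as np

def gaussian_rf_spike_times(x, n_fields=8, x_min=0.0, x_max=1.0,
                            t_max=10.0, threshold=0.1):
    """Encode scalar x into the spike times of n_fields neurons.

    Each neuron has a Gaussian receptive field over [x_min, x_max];
    high activation maps to an early spike, low activation to a late
    one, and activation below threshold produces no spike (inf).
    """
    centers = np.linspace(x_min, x_max, n_fields)
    width = (x_max - x_min) / (n_fields - 1)       # overlapping fields
    activation = np.exp(-0.5 * ((x - centers) / width) ** 2)
    times = t_max * (1.0 - activation)             # strong -> early
    times[activation < threshold] = np.inf         # too weak: silent
    return times

times = gaussian_rf_spike_times(0.3)
# The neuron whose center is nearest to x fires first.
```

This population code is what lets a delay-based RBF network compare inputs purely through relative spike timing.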
|
126 |
Computação por assembleias neurais em redes neurais pulsadas / Computing with neural assemblies in spiking neural networks
João Henrique Ranhel Ribeiro, 05 December 2011
One of the great mysteries of science is to understand how nervous systems are capable of the extraordinary computational operations they perform. Brains are probably the structures in which matter and energy are organized in the most complex way in the universe. Central to brain computation is the concept of the neuron, and how neurons compute is the subject of intense scientific investigation. A current consensus is that neurons form transient groups (assemblies) in order to represent things, perform computational operations, and execute cognitive processes, although the mechanisms underlying computation by assemblies are not yet well understood. This thesis proposes a way of explaining how neural assembly computing may occur. It is shown that two components are fundamental for the formation of neural coalitions: the temporal relation among neural groups and the coupling factor among them. Assemblies presuppose spiking neurons; therefore, we simulate assembly computing using spiking neural networks. The approach taken in this thesis is functional: it presents a theoretical framework concerning the properties, principles, and dynamics that allow computational operations by neural coalitions. The thesis shows that: (i) when neurons form assemblies, a kind of stochastic logic function is implicitly computed; (ii) assemblies may form groups that feed back on each other, creating bistable groups; (iii) bistable groups internally represent the events that created them; and (iv) assemblies may branch and also dissolve other assemblies, which gives rise to complex algorithms. This is an initial investigation of neural assembly computing, and there is much to be done; this thesis presents the foundational concepts for the new approach. A set of programs in the appendices allows the reader to simulate assembly formation, branching, inhibition, and reverberation, among other properties and components of the proposal.
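Item (ii) above, bistable groups created by mutual feedback, can be illustrated with a minimal binary-rate sketch (not the thesis's spiking simulations): two groups excite each other, so a brief external trigger switches the loop into a self-sustaining active state that persists after the input is gone. Weights and thresholds are arbitrary illustrative values.

```python
def step(a, b, drive, w=1.5, theta=1.0):
    """One synchronous update of two mutually exciting groups.

    Each group becomes active (1) when the excitation from the other
    group plus the external drive reaches the threshold theta.
    """
    return (1 if w * b + drive >= theta else 0,
            1 if w * a + drive >= theta else 0)

a, b = 0, 0
history = []
for t in range(8):
    drive = 1.0 if t == 0 else 0.0   # brief external trigger, then silence
    a, b = step(a, b, drive)
    history.append((a, b))
# After the trigger is removed, mutual excitation keeps the loop active:
# the pair has two stable states (both off / both on), i.e. it is bistable.
```

The persistent both-on state is the internal representation of the triggering event, in the sense of item (iii).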
|
128 |
Développement d'un réseau de neurones impulsionnels sur silicium à synapses memristives / Development of a silicon spiking neural network with memristive synapses
Lecerf, Gwendal, 29 September 2014
Financially supported by the ANR MHANN (Memristive Hardware Analog Neural Network) project, this work proposes a spiking neural-network architecture for image recognition, a task at which traditional sequential processors are inefficient and which artificial neural networks can handle as a complement to them. In 2008, a new passive electrical component was demonstrated: the memristor. Classified as the fourth passive circuit element, its resistance can be modified by the current density flowing through it, and the change is retained. Behaving intrinsically as artificial synapses, memristive devices are therefore ideal candidates to play the role of synapses in artificial neural networks. Through measurements on the ferroelectric memristor technology of Julie Grollier's team at the CNRS/Thales joint research unit, we demonstrated that STDP (Spike-Timing-Dependent Plasticity) learning, classically used with spiking neural networks, can be obtained with these devices. This form of learning, inspired by biology, makes the synaptic weights vary as a function of neuronal events. Building on these memristor measurements and on simulations from a program developed with our partners at INRIA Saclay, we successively designed two silicon chips for two ferroelectric memristor technologies. The first technology (BTO), less performant, was set aside in favor of a second one (BFO); the second chip was designed using feedback from the first and contains two layers of a spiking neural network dedicated to learning 81-pixel images. By connecting this chip to a package containing a memristor crossbar, we will be able to build a demonstrator of a hybrid neural network with ferroelectric memristive synapses.
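At the behavioral level, the ferroelectric memristive synapses described here are devices whose conductance is nudged up or down by voltage pulses within physical bounds. A generic saturating update model, an illustrative sketch rather than the measured BTO/BFO device physics:

```python
def apply_pulse(g, polarity, g_min=1e-6, g_max=1e-4, alpha=0.05):
    """Bounded conductance update of an idealized memristive synapse.

    A positive pulse potentiates, a negative one depresses; the step
    size shrinks as the device approaches its physical bounds, so the
    conductance saturates instead of diverging.
    """
    if polarity > 0:
        g += alpha * (g_max - g)       # potentiation, saturating at g_max
    else:
        g -= alpha * (g - g_min)       # depression, saturating at g_min
    return g

g = 1e-5
for _ in range(20):
    g = apply_pulse(g, +1)             # repeated potentiating pulses
# g has moved toward g_max but can never exceed it.
```

Bounded, gradual updates of this kind are what allow a crossbar of such devices to store the analog weights that an STDP rule produces.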
|
129 |
Energy-Efficient Private Forecasting on Health Data using SNNs / Energieffektiv privat prognos om hälsodata med hjälp av SNNs
Di Matteo, Davide, January 2022
Health monitoring devices, such as Fitbit, are gaining popularity both as wellness tools and as a source of information for healthcare decisions. Predicting such wellness goals accurately is critical for the users to make informed lifestyle choices. The core objective of this thesis is to design and implement such a system that takes energy consumption and privacy into account. This research is modelled as a time-series forecasting problem that makes use of Spiking Neural Networks (SNNs) due to their proven energy-saving capabilities. Thanks to their design that closely mimics natural neural networks (such as the brain), SNNs have the potential to significantly outperform classic Artificial Neural Networks in terms of energy consumption and robustness. In order to prove our hypotheses, a previous research by Sonia et al. [1] in the same domain and with the same dataset is used as our starting point, where a private forecasting system using Long short-term memory (LSTM) is designed and implemented. Their study also implements and evaluates a clustering federated learning approach, which fits well the highly distributed data. The results obtained in their research act as a baseline to compare our results in terms of accuracy, training time, model size and estimated energy consumed. Our experiments show that Spiking Neural Networks trades off accuracy (2.19x, 1.19x, 4.13x, 1.16x greater Root Mean Square Error (RMSE) for macronutrients, calories burned, resting heart rate, and active minutes respectively), to grant a smaller model (19% less parameters an 77% lighter in memory) and a 43% faster training. Our model is estimated to consume 3.36μJ per inference, which is much lighter than traditional Artificial Neural Networks (ANNs) [2]. The data recorded by health monitoring devices is vastly distributed in the real-world. Moreover, with such sensitive recorded information, there are many possible implications to consider. 
For these reasons, we apply the clustering federated learning implementation of [1] to our use case. However, adopting such techniques can be challenging, since it is difficult to learn from irregular data sequences. We use a two-step streaming clustering approach to classify customers based on their eating and exercise habits. Training different models for each group of users has been shown to be useful, particularly in terms of training time; however, this benefit depends strongly on cluster size. Our experiments conclude that both error and training time decrease if the clusters contain enough data to train the models. Finally, this study addresses data privacy by using state-of-the-art differential privacy. We apply ε-differential privacy to both our baseline model (trained on the whole dataset) and our federated learning based approach. With a differential privacy budget of ε = 0.1, our experiments report an increase in the measured average error (RMSE) of only 25%: specifically, +23.13%, +25.71%, +29.87%, +21.57% for macronutrients (grams), calories burned (kCal), resting heart rate (beats per minute), and active minutes, respectively.
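The ε-differential privacy guarantee mentioned above is commonly obtained by adding noise calibrated to a query's sensitivity, for example via the Laplace mechanism. The abstract does not specify which mechanism the thesis uses, so the following is only a generic sketch of that standard technique; the function names and the sensitivity value are assumptions for illustration.

```python
import math
import random


def laplace_noise(scale):
    # Inverse-CDF sampling of a Laplace(0, scale) random variable
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)


def privatize(value, sensitivity, epsilon):
    """Release `value` with the Laplace mechanism: noise with scale
    sensitivity/epsilon yields epsilon-differential privacy for a
    query of the given L1 sensitivity."""
    return value + laplace_noise(sensitivity / epsilon)
```

A small ε such as 0.1 means a noise scale ten times the sensitivity, which is consistent with the modest accuracy loss (~25% RMSE increase) being the price paid for a strong privacy guarantee.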
|
130 |
Neural Coding Strategies in Cortico-Striatal Circuits Subserving Interval Timing
Cheng, Ruey-Kuang, January 2010 (has links)
Interval timing, defined as timing and time perception in the seconds-to-minutes range, is a higher-order cognitive function that has been shown to be critically dependent upon cortico-striatal circuits in the brain. However, our understanding of how different neuronal subtypes within these circuits cooperate to subserve interval timing remains elusive. The present study was designed to investigate this issue by focusing on the spike waveforms of neurons and their synchronous firing patterns with local field potentials (LFPs) recorded from cortico-striatal circuits while rats were performing two standard interval-timing tasks. Experiment 1 demonstrated that neurons in cortico-striatal circuits can be classified into 4 different clusters based on their distinct spike waveforms and behavioral correlates. These distinct neuronal populations were shown to be differentially involved in timing and reward processing. More importantly, the LFP-spike synchrony data suggested that neurons in 1 particular cluster were putative fast-spiking interneurons (FSIs) in the striatum, and these neurons responded to both timing and reward processing. Experiment 2 reported electrophysiological data similar to these previous findings, but identified a different cluster of striatal neurons - putative tonically-active neurons (TANs) - revealed by their distinct spike waveforms and special firing patterns during the acquisition of the task. These firing patterns of FSIs and TANs contrasted with putative striatal medium-spiny neurons (MSNs), which preferentially responded to temporal processing in the current study. Experiment 3 further investigated the proposal that interval timing is subserved by cortico-striatal circuits by using microstimulation. The findings revealed a stimulation frequency-dependent "stop" or "reset" response pattern in rats receiving microstimulation in either the cortex or the striatum during the performance of the timing task. 
Taken together, the current findings further support the view that interval timing is represented in cortico-striatal networks that involve multiple types of interneurons (e.g., FSIs and TANs) functionally connected with the principal projection neurons (i.e., MSNs) in the dorsal striatum. When specific components of these complex networks are electrically stimulated, the ongoing timing processes are temporarily "stopped" or "reset" depending on the properties of the stimulation. / Dissertation
|