1

Plasticity in large-scale neuromorphic models of the neocortex

Knight, James. January 2017.
The neocortex is the most recently evolved part of the mammalian brain and enables the intelligent, adaptable behaviour that has allowed mammals to conquer much of planet Earth. The human neocortex consists of a thin sheet of neural tissue containing approximately 20×10^9 neurons. These neurons are connected by a dense network of highly plastic synapses whose efficacy and structure constantly change in response to internal and external stimuli. Understanding exactly how this computational substrate allows us to perceive the world, plan our actions and use language is one of the grand challenges of computing research. One way to address this challenge is to build and simulate neural systems, an approach that neuromorphic systems such as SpiNNaker are designed to enable. The basic computational unit of a SpiNNaker system is a general-purpose ARM processor, which allows it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of synaptic plasticity, which has been described using a plethora of models. In this thesis I present a new SpiNNaker synaptic plasticity implementation and, using it, develop a neocortically inspired model of temporal sequence learning consisting of 2×10^4 neurons and 5.1×10^7 plastic synapses: the largest plastic neural network ever simulated on neuromorphic hardware. I then identify several problems that occur when simulating such models on SpiNNaker with existing approaches, before presenting a new, more flexible approach. This new approach not only solves many of these problems but also suggests directions for architectural improvements in future neuromorphic systems.
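As an illustration of the kind of rule such plasticity implementations support (this is not the thesis's own implementation), a minimal pair-based spike-timing-dependent plasticity (STDP) update can be sketched as follows; all constants are illustrative:

```python
import math

# Pair-based STDP with exponential windows -- a minimal sketch, not the
# thesis's SpiNNaker implementation. Amplitudes and time constants are
# illustrative values only.
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms

def stdp_dw(delta_t):
    """Weight change for one pre/post spike pair.

    delta_t = t_post - t_pre (ms): positive (pre before post) potentiates,
    negative (post before pre) depresses; magnitude decays with |delta_t|.
    """
    if delta_t >= 0:
        return A_PLUS * math.exp(-delta_t / TAU_PLUS)
    return -A_MINUS * math.exp(delta_t / TAU_MINUS)

# Apply a few spike pairings to one synapse, clipping the weight to [0, 1]
w = 0.5
for dt in (5.0, -5.0, 15.0):
    w = min(1.0, max(0.0, w + stdp_dw(dt)))
```

On SpiNNaker-like hardware the same update is typically driven by per-neuron spike-time traces rather than explicit pair enumeration, but the weight-change curve is the same.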
2

Real time Spaun on SpiNNaker : functional brain simulation on a massively-parallel computer architecture

Mundy, Andrew. January 2017.
Model building is a fundamental scientific tool. Increasingly there is interest in building neurally implemented models of cognitive processes with the intention of modelling brains. However, simulation of such models can be prohibitively expensive in both the time and energy required. For example, Spaun - "the world's first functional brain model", comprising 2.5 million neurons - required 2.5 hours of computation for every second of simulation on a large compute cluster. SpiNNaker is a massively parallel, low-power architecture specifically designed for the simulation of large neural models in biological real time. Ideally, SpiNNaker could be used to facilitate rapid simulation of models such as Spaun. However, the Neural Engineering Framework (NEF), with which Spaun is built, maps poorly to the architecture - to the extent that models such as Spaun would consume vast portions of SpiNNaker machines and still not run as fast as biology. This thesis investigates whether real-time simulation of Spaun on SpiNNaker is at all possible, and presents three techniques which facilitate such a simulation. The first reduces the memory, compute and network loads imposed by the NEF; consequently, it is demonstrated that only a twentieth of the cores otherwise needed are required to simulate a core component of the Spaun network. The second technique uses a small number of additional cores to significantly reduce the network traffic required to simulate this core component; as a result, simulation in real time is shown to be feasible. The final technique is a novel logic-minimisation algorithm which reduces the size of the routing tables that direct information around the SpiNNaker machine; this technique is necessary to allow the routing of models of the scale and complexity of Spaun.
Together these provide the ability to simulate the Spaun model in biological real time - representing a speed-up of 9000 times over previously reported results - with room for much larger models on full-scale SpiNNaker machines.
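The NEF mentioned above represents values in the activity of neuron populations and recovers them with linear decoders solved by least squares. A minimal sketch of that encode/decode principle, with rectified-linear neurons and made-up parameters (not Nengo's defaults or Spaun's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50  # neurons in the population (illustrative size)

# Random gains, biases and +/-1 encoders -- illustrative, not NEF defaults
gains = rng.uniform(0.5, 2.0, N)
biases = rng.uniform(-1.0, 1.0, N)
encoders = rng.choice([-1.0, 1.0], N)

def rates(x):
    """Rectified-linear tuning curves for a scalar stimulus x."""
    return np.maximum(0.0, gains * encoders * x + biases)

# Solve for linear decoders d so that rates(x) @ d approximates x
xs = np.linspace(-1, 1, 200)
A = np.array([rates(x) for x in xs])        # (200, N) activity matrix
d, *_ = np.linalg.lstsq(A, xs, rcond=None)  # least-squares decoders

x_hat = rates(0.3) @ d  # decoded estimate of the represented value 0.3
```

The memory and network costs the thesis attacks stem from these dense encoder/decoder matrices; factoring and sharing them across cores is what makes the twentieth-of-the-cores result possible.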
3

Integrating a SpiNNaker 2 prototype on an embedded platform : Hardware design and firmware modification

Hessel, Mikael. January 2021.
The field of neuromorphic computing concerns simulating the information processing of a brain in software or hardware on a computing platform. One neuromorphic platform that uses specialized hardware is SpiNNaker: an integrated circuit consisting of multiple general-purpose processing cores that can run simulations of neurons, with a custom on-chip network that mimics the high level of neuron interconnectedness in a brain. The second generation of this chip is currently in development, and a prototype, JiB 2, is used in this thesis. The chip has a Ball Grid Array (BGA) footprint and requires several supply voltage levels to operate, making implementation more complex. To use such a chip in an autonomous robot, the hardware needs to be in a small form factor. It is beneficial to use an intermediary platform with support for many actuators and sensors, to avoid having to develop new drivers (and because the processing power of individual blocks in JiB 2 is not well suited to these tasks). This thesis shows how a platform for autonomous use in robots can be designed with the current prototype chip. It details the design decisions made for the power supply and for working with the BGA footprint. The existing software is explained and the modifications made to it are shown. Some performance metrics (memory requirements, power and cost) are characterized. A simple program running on the prototype chip, with input and output from an STM32-based microcontroller development board, is demonstrated. This project suggests a path to deploying software on the JiB 2 and letting it interact with the physical world.
4

Computational methods for event-based signals and applications

Lagorce, Xavier. 22 September 2015.
Computational neurosciences are a great source of inspiration for data processing and computation. Nowadays, however advanced the state of the art of computer vision may be, it still falls far short of what our brains, or those of other animals and insects, are capable of. This thesis takes on this observation to develop new computational methods for computer vision and for generic computation, relying on data produced by event-based sensors such as the so-called "silicon retinas". These sensors mimic biology and are used in this work because of the sparseness of their data and their precise timing: information is coded into events which are generated with microsecond precision. This opens the door to a whole new paradigm for machine vision, relying on time instead of images. We use these sensors to develop applications such as object tracking, object recognition and feature extraction. We also use neuromorphic computing platforms to implement these algorithms more efficiently, which led us to rethink the idea of computation itself. This work proposes new ways of thinking about computer vision via event-based sensors and a new paradigm for computation: time replaces memory to allow completely local operations, enabling highly parallel machines with a non-Von Neumann architecture.
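As a sketch of the time-based paradigm this abstract describes, events can be kept as (x, y, timestamp, polarity) tuples and summarised by an exponentially decayed per-pixel timestamp map, a "time surface" in spirit; sensor size, event values and the decay constant below are all illustrative:

```python
import numpy as np

W, H = 8, 8
last_ts = np.full((H, W), -np.inf)  # last event time per pixel (none yet)

def time_surface(events, t_now, tau=0.05):
    """Fold events into per-pixel timestamps and return a decayed map.

    Each event is (x, y, t, polarity). Pixels that fired recently map to
    values near 1; pixels that never fired map to 0. This is a minimal
    illustration of time-coded features, not the thesis's algorithms.
    """
    for x, y, t, _pol in events:
        last_ts[y, x] = t
    return np.exp((last_ts - t_now) / tau)

events = [(1, 2, 0.010, 1), (5, 5, 0.030, -1)]
surface = time_surface(events, t_now=0.030)
```

Because every update touches only the pixel that fired, the computation stays local and event-driven, which is what makes such representations a good match for non-Von Neumann hardware.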
5

Scalability and robustness of artificial neural networks

Stromatias, Evangelos. January 2016.
Artificial Neural Networks (ANNs) are increasingly popular today, used in many diverse research fields and contexts ranging from biological simulations and experiments on artificial neuronal models to machine learning models intended for industrial and engineering applications. One example is the recent success of Deep Learning architectures (e.g., Deep Belief Networks [DBNs]), which are in the spotlight of machine learning research, as they are capable of delivering state-of-the-art results in many domains. While the performance of such ANN architectures is greatly affected by their scale, their capacity for scalability, both during training and execution, is limited by increased power consumption and communication overheads, implicitly posing a limiting factor on their real-time performance. Ongoing work on the design and construction of spike-based neuromorphic platforms offers an alternative for running large-scale neural networks, such as DBNs, with significantly lower power consumption and lower latencies, but has to overcome the hardware limitations and model specialisations imposed by these types of circuits. SpiNNaker is a novel massively parallel, fully programmable and scalable architecture designed to enable real-time spiking neural network (SNN) simulations. These properties render SpiNNaker an attractive neuromorphic exploration platform for running large-scale ANNs; however, it is necessary to investigate thoroughly both its power requirements and its communication latencies. This research focuses on two main aspects. First, it characterises the power requirements and communication latencies of the SpiNNaker platform while running large-scale SNN simulations.
The results of this investigation lead to the derivation of a power estimation model for the SpiNNaker system, a reduction of the overall power requirements and the characterisation of the intra- and inter-chip spike latencies. Then it focuses on a full characterisation of spiking DBNs, by developing a set of case studies in order to determine the impact of (a) the hardware bit precision; (b) the input noise; (c) weight variation; and (d) combinations of these on the classification performance of spiking DBNs for the problem of handwritten digit recognition. The results demonstrate that spiking DBNs can be realised on limited precision hardware platforms without drastic performance loss, and thus offer an excellent compromise between accuracy and low-power, low-latency execution. These studies intend to provide important guidelines for informing current and future efforts around developing custom large-scale digital and mixed-signal spiking neural network platforms.
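The effect of hardware bit precision in case study (a) can be illustrated with a toy fixed-point quantiser; the bit widths and weight values below are illustrative, not those used in the thesis:

```python
def quantize(w, frac_bits):
    """Round a weight to a fixed-point grid with frac_bits fractional bits.

    A toy model of the reduced-precision hardware weights the case studies
    examine: the coarser the grid, the larger the rounding error a trained
    network must tolerate.
    """
    scale = 1 << frac_bits
    return round(w * scale) / scale

weights = [0.73, -0.41, 0.05]
q8 = [quantize(w, 8) for w in weights]  # fine grid: small rounding error
q2 = [quantize(w, 2) for w in weights]  # coarse grid: large error, values vanish
```

Note how at 2 fractional bits the smallest weight rounds to zero entirely; the thesis's finding is that spiking DBNs degrade gracefully under this kind of rounding rather than failing abruptly.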
6

Evaluation and Improvement of Application Deployment in Hybrid Edge Cloud Environment : Using OpenStack, Kubernetes, and Spinnaker

Jendi, Khaled. January 2020.
Traditional mechanisms for deploying different applications can be costly in terms of time and resources, especially when an application requires a specific environment to run in and has many kinds of dependencies; setting up such an application would require an expert to identify all of them. In addition, it is difficult to deploy applications with efficient usage of the resources available in the distributed environment of the cloud, and deploying different projects on the same resources is a challenge. To address this problem, we evaluated different deployment mechanisms using heterogeneous infrastructure-as-a-service (IaaS) offerings, OpenStack and Microsoft Azure, together with the platform-as-a-service Kubernetes. Finally, to automate and integrate deployments, we used Spinnaker as the continuous-delivery framework. The goal of this thesis work is to evaluate and improve different deployment mechanisms in terms of edge cloud performance. Performance depends on achieving efficient usage of cloud resources, reducing latency, scalability, replication and rolling upgrades, load balancing between data nodes, high availability and zero downtime for deployed applications. These problems are solved by designing and deploying an infrastructure and platform in which Kubernetes (PaaS) runs on top of OpenStack (IaaS). In addition, the use of Docker containers rather than regular virtual machines (container orchestration) has a large impact. The report demonstrates and discusses the results along with various test cases regarding the different deployment methods, and presents the deployment process. It also includes suggestions for developing more reliable and secure deployment in the future with a heterogeneous container-orchestration infrastructure.
7

Implementation of bioinspired algorithms on the neuromorphic VLSI system SpiNNaker 2

Yan, Yexin. 29 June 2023.
It is believed that neuromorphic hardware will accelerate neuroscience research and enable next-generation edge AI. Conversely, brain-inspired algorithms are expected to work efficiently on neuromorphic hardware. Neither happens automatically: to bring hardware and algorithm together efficiently, optimizations are necessary, based on an understanding of both sides. In this work, software frameworks and optimizations for the efficient implementation of neural-network-based algorithms on SpiNNaker 2 are proposed, resulting in reduced power consumption, memory footprint and computation time. First, a software framework including power management strategies is proposed that applies dynamic voltage and frequency scaling (DVFS) to the simulation of spiking neural networks; it is also the first software framework to run a neural network on SpiNNaker 2. The results show that power consumption is reduced by 60.7% in the synfire chain benchmark. Second, numerical and data-structure optimizations lead to an efficient implementation of reward-based synaptic sampling, one of the most complex plasticity algorithms ever implemented on neuromorphic hardware. The results show a reduction of computation time by a factor of 2 and of energy consumption by 62%. Third, software optimizations are proposed that effectively exploit the efficiency of the multiply-accumulate array and the flexibility of the ARM core, resulting in, compared with Loihi, 3 times faster inference and 5 times lower energy consumption in a keyword-spotting benchmark, and faster inference with lower energy consumption in high-dimensional cases of an adaptive control benchmark. The results of this work demonstrate the potential of SpiNNaker 2, explore its range of applications and provide feedback for the design of the next generation of neuromorphic hardware.
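The leverage DVFS gives the first framework comes from the classic CMOS dynamic-power relation P ≈ C·V²·f: lowering voltage and frequency together cuts power superlinearly. A sketch with illustrative numbers (not SpiNNaker 2's actual capacitance or operating points):

```python
def dynamic_power(c_eff, voltage, freq_hz):
    """Classic CMOS dynamic-power model: P = C_eff * V^2 * f.

    c_eff is the effective switched capacitance in farads; all values here
    are illustrative, chosen only to show the shape of the trade-off.
    """
    return c_eff * voltage ** 2 * freq_hz

P_HIGH = dynamic_power(1e-9, 1.0, 300e6)  # 'performance' operating point
P_LOW = dynamic_power(1e-9, 0.7, 150e6)   # voltage and frequency scaled down

saving = 1 - P_LOW / P_HIGH  # fraction of dynamic power saved
```

Halving the frequency alone would halve power; adding the voltage reduction pushes the saving past 75% in this toy example, which is why DVFS schedulers scale both whenever the workload's timing slack allows.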
8

Experimental studies of fluid-structure interaction on downwind sails

Deparday, Julien. 6 July 2016.
A full-scale experimental study on an instrumented sailing yacht is conducted to better assess the aero-elastic behaviour of the sails and rigging in downwind navigation. A downwind sail is a non-developable surface with high curvature, leading to massive flow separation. In addition, spinnakers are thin and flexible sails, making the fluid-structure interaction strongly coupled. Because some rules of similitude cannot be respected, the unsteady behaviour of downwind sails cannot easily be investigated in wind tunnel tests, which need comparison with full-scale experiments. Moreover, unsteady numerical simulations modelling the aero-elastic behaviour of the sails and rigging require validation. An onboard instrumentation system has been developed on an 8-metre J/80 sailboat to simultaneously and dynamically measure the flying shape of the spinnaker, the aerodynamic loads transmitted to the rigging, the pressure distribution on the sail, and the boat and wind data. The shape of the spinnaker while sailing is acquired by a photogrammetric system developed during this PhD. The accuracy of this new system, better than 1.5%, allows measurement of the global shape and the main dynamic deformations, such as the flapping of the luff. The aerodynamic load produced by the spinnaker is assessed by measuring the load magnitudes and directions at the three corners of the sail (head, tack and clew), and also by the pressure distribution on the spinnaker. The global behaviour of the spinnaker is analysed according to the apparent wind angle. A new representation using Bézier triangular surfaces defines the spinnaker's 3D shape; a few control points are enough to represent the sail and easily characterise its type. A typical unsteady behaviour of the spinnaker is also analysed. Letting the luff of the sail flap slightly is known by sailors as the optimal trim, but had never been scientifically studied before. It is found to be a complex three-dimensional fluid-structure interaction problem in which a strong suction appears near the leading edge, temporarily increasing the force coefficient in a way that is not observed with a tighter trim.
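The triangular Bézier representation mentioned in this abstract builds a surface point from a handful of control points via Bernstein polynomials in barycentric coordinates. A sketch for a degree-2 patch with six control points; the control net below is made up for illustration, not measured sail data:

```python
from math import factorial

def bezier_triangle_deg2(cp, u, v):
    """Evaluate a degree-2 triangular Bezier patch at barycentric (u, v, 1-u-v).

    cp maps multi-indices (i, j, k) with i + j + k = 2 to 3D control points.
    """
    w = 1.0 - u - v
    pt = [0.0, 0.0, 0.0]
    for (i, j, k), p in cp.items():
        # Bernstein basis B^2_{ijk} = 2!/(i! j! k!) * u^i * v^j * w^k
        b = 2.0 / (factorial(i) * factorial(j) * factorial(k)) * u**i * v**j * w**k
        pt = [a + b * c for a, c in zip(pt, p)]
    return pt

# Illustrative control net: three corners and three edge midpoints
cp = {(2, 0, 0): (1.0, 0.0, 0.0), (0, 2, 0): (0.0, 1.0, 0.0),
      (0, 0, 2): (0.0, 0.0, 0.0), (1, 1, 0): (0.5, 0.5, 0.3),
      (1, 0, 1): (0.5, 0.0, 0.2), (0, 1, 1): (0.0, 0.5, 0.2)}

corner = bezier_triangle_deg2(cp, 1.0, 0.0)  # patch interpolates this corner
```

The patch passes exactly through its three corner control points, which maps naturally onto a sail's head, tack and clew; the remaining control points shape the curvature between them.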
9

Building and operating large-scale SpiNNaker machines

Heathcote, Jonathan David. January 2016.
SpiNNaker is an unconventional supercomputer architecture designed to simulate up to one billion biologically realistic neurons in real-time. To achieve this goal, SpiNNaker employs a novel network architecture which poses a number of practical problems in scaling up from desktop prototypes to machine-room-filling installations. SpiNNaker's hexagonal torus network topology has received mostly theoretical treatment in the literature. This thesis tackles some of the challenges encountered when building 'real-world' systems. Firstly, a scheme is devised for physically laying out hexagonal torus topologies in machine rooms which avoids long cables; this is demonstrated on a half-million core SpiNNaker prototype. Secondly, to improve the performance of existing routing algorithms, a more efficient process is proposed for finding (logically) short paths through hexagonal torus topologies. This is complemented by a formula which provides routing algorithms with greater flexibility when finding paths, potentially resulting in a more balanced network utilisation. The scale of SpiNNaker's network and the models intended for it also present their own challenges. Placement and routing algorithms are developed which assign processes to nodes and generate paths through SpiNNaker's network. These algorithms minimise congestion and tolerate network faults. The proposed placement algorithm is inspired by techniques used in chip design and is shown to enable larger applications to run on SpiNNaker than the previous state-of-the-art. Likewise the routing algorithm developed is able to tolerate network faults, inevitably present in large-scale systems, with little performance overhead.
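The "(logically) short paths" above come from expressing an (x, y) offset in a three-axis form, since the hexagonal topology adds a diagonal link direction to the usual two. A common trick, sketched here as the idea only (not the thesis's exact algorithm, and ignoring the torus's wrap-around links), is to subtract the median of (dx, dy, 0):

```python
def minimal_hops(dx, dy):
    """Minimum hop count for offset (dx, dy) in a hexagonal mesh whose links
    run along the x, y and diagonal (x+y) axes.

    Moving s hops on the diagonal axis realises offset (s, s), so a route
    (x, y, z) reaches (x + z, y + z); choosing z as the median of
    (dx, dy, 0) minimises |x| + |y| + |z|.
    """
    m = sorted((dx, dy, 0))[1]         # median of the three components
    x, y, z = dx - m, dy - m, m        # z = hops taken on the diagonal axis
    assert (x + z, y + z) == (dx, dy)  # the route still realises the offset
    return abs(x) + abs(y) + abs(z)
```

For example, offset (3, 3) needs only 3 hops along the diagonal rather than 6 along the two rectilinear axes, which is exactly the saving the hexagonal links exist to provide.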
10

Evolution of spiking neural networks for temporal pattern recognition and animat control

Abdelmotaleb, Ahmed Mostafa Othman. January 2016.
I extended an artificial life platform called GReaNs (the name stands for Gene Regulatory evolving artificial Networks) to explore the evolutionary abilities of a biologically inspired Spiking Neural Network (SNN) model. The encoding of SNNs in GReaNs was inspired by the encoding of gene regulatory networks. As a proof of principle, I used GReaNs to evolve SNNs to obtain a network whose output neuron generates a predefined spike train in response to a specific input. Temporal pattern recognition was one of the main tasks during my studies: it is widely believed that the nervous systems of biological organisms use temporal patterns of inputs to encode information, but the learning technique used for temporal pattern recognition is not yet clear. I studied the ability to evolve spiking networks with different numbers of interneurons, in the absence and presence of noise, to recognize predefined temporal patterns of inputs. The results showed that, in the presence of noise, it was possible to evolve successful networks; however, networks with only one interneuron were not robust to noise. The foraging behaviour of many small animals depends mainly on their olfactory system. I explored whether it was possible to evolve SNNs able to control an agent finding food particles on 2-dimensional maps. Using firing-rate encoding for the sensory information in the olfactory input neurons, I obtained SNNs able to control an agent that could detect the positions of the food particles and move toward them. Furthermore, I made unsuccessful attempts to use GReaNs to evolve an SNN controlling an agent that collects sound sources of one type out of several, each sound type represented as a pattern of different frequencies. In order to use the computational power of neuromorphic hardware, I integrated GReaNs with the SpiNNaker hardware system. Only the simulation part was carried out on SpiNNaker; the remaining steps of the genetic algorithm were performed in GReaNs.
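The evolutionary loop underlying such experiments can be sketched in a few lines. This toy hill-climber evolves a real-valued genome toward a fixed target as a stand-in for scoring an evolved network against a predefined spike train; it is not GReaNs' actual encoding or algorithm:

```python
import random

random.seed(1)
TARGET = [0.2, 0.8, 0.5]  # stand-in for a desired network response

def fitness(genome):
    """Negative squared error to the target -- higher is better.

    In GReaNs-style experiments this step would instead simulate the encoded
    SNN (on a host or on SpiNNaker) and score its output spike train.
    """
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, sigma=0.1):
    """Gaussian perturbation of every gene."""
    return [g + random.gauss(0, sigma) for g in genome]

# (1+1)-style loop: keep the better of parent and mutated offspring
genome = [random.random() for _ in TARGET]
for _ in range(500):
    child = mutate(genome)
    if fitness(child) > fitness(genome):
        genome = child
```

The split described in the abstract corresponds to the `fitness` call: simulation runs on SpiNNaker, while selection and mutation stay on the host.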
