51

HIGH PERFORMANCE AND ENERGY EFFICIENT DEEP LEARNING MODELS

Bing Han (12872594) 16 June 2022 (has links)
Spiking Neural Networks (SNNs) have recently attracted significant research interest as the third generation of artificial neural networks that can enable low-power, event-driven data analytics. We propose ANN-SNN conversion using a "soft reset" spiking neuron model, referred to as the Residual Membrane Potential (RMP) spiking neuron, which retains the "residual" membrane potential above threshold at the firing instants. In addition, we propose a time-based coding scheme, named Temporal-Switch-Coding (TSC), and a corresponding TSC spiking neuron model. Each input image pixel is presented using two spikes with opposite polarity, and the timing between the two spiking instants is proportional to the pixel intensity. We demonstrate near loss-less ANN-SNN conversion using RMP neurons for VGG-16, ResNet-20, and ResNet-34 SNNs on challenging datasets including CIFAR-10, CIFAR-100, and ImageNet. With the help of TSC coding, the converted SNNs achieve 7-14.5× lower inference latency and 30-60× fewer addition operations and memory accesses per inference across datasets, compared to state-of-the-art (SOTA) SNN models. In the second part of the thesis, we propose a new type of recurrent neural network (RNN) architecture, named Oscillatory Fourier Neural Network (O-FNN). We demonstrate that O-FNN is mathematically equivalent to a simplified form of the Discrete Fourier Transform applied to a periodical activation. In particular, the computationally intensive back-propagation through time in training is eliminated, leading to faster training while achieving SOTA inference accuracy in a diverse group of sequential tasks. For instance, applying the proposed model to sentiment analysis on the IMDB review dataset reaches 89.4% test accuracy within 5 epochs, accompanied by over 35x reduction in model size compared to Long Short-Term Memory (LSTM). The proposed novel RNN architecture is well poised for intelligent sequential processing in resource-constrained hardware.
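A minimal sketch of the two ideas above, under simplifying assumptions (unit firing threshold, pixel intensities normalized to [0, 1]; function names, time base and constants are illustrative, not the thesis implementation):

```python
import numpy as np

def rmp_neuron(inputs, threshold=1.0):
    """Integrate-and-fire with 'soft reset': on firing, subtract the threshold
    instead of resetting to zero, so residual membrane potential is retained.
    `inputs` is a (T,) array of weighted input currents per timestep."""
    v = 0.0
    spikes = np.zeros_like(inputs)
    for t, x in enumerate(inputs):
        v += x
        if v >= threshold:
            spikes[t] = 1.0
            v -= threshold          # soft reset keeps the above-threshold residual charge
    return spikes

def tsc_encode(pixel, t_max=100):
    """Temporal-Switch-Coding sketch: a pixel in [0, 1] is represented by two spikes
    of opposite polarity; the interval between them grows with pixel intensity."""
    train = np.zeros(t_max)
    gap = int(round(pixel * (t_max - 1)))
    train[0] = +1.0                           # positive 'switch-on' spike
    train[min(gap + 1, t_max - 1)] = -1.0     # negative 'switch-off' spike after the coded interval
    return train

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(rmp_neuron(rng.uniform(0, 0.6, size=20)))
    print(np.nonzero(tsc_encode(0.35))[0])
```

The point of the soft reset is visible in the loop: the membrane potential keeps whatever charge exceeded the threshold, so no input is discarded at a firing instant.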
52

Stimulus representation in anisotropically connected spiking neural networks

Hiselius, Leo January 2021 (has links)
Biological neuronal networks are a key object of study in the field of computational neuroscience, and recent studies have also shown their potential applicability within artificial intelligence and robotics [1]. They come in many shapes and forms, and a well-known and widely studied example is the liquid state machine from 2004 [2]. In 2019, a novel and simple connectivity rule was presented with the introduction of the SpreizerNet [3]. The connectivity of the SpreizerNet is governed by a type of gradient noise known as Perlin noise, and as such the connectivity is anisotropic but correlated. The spiking activity produced in the SpreizerNet is possibly functionally relevant, e.g. for motor control or classification of input stimuli. In 2020, it was shown to be useful for motor control [4]. In this Master's thesis, we ask whether the spiking activity of the SpreizerNet is functionally relevant in the context of stimulus representation. We investigate how input stimuli from the MNIST handwritten digits dataset are represented in the spatio-temporal activity sequences produced by the SpreizerNet, and whether this representation is sufficient for separation. Furthermore, we consider how the parameters governing the local structure of connectivity impact representation and separation. We find that (1) the SpreizerNet separates input stimuli in the initial stage after stimulus onset and (2) separation decreases over time as the activity from dissimilar inputs becomes unified.
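A rough sketch of the kind of connectivity rule described above, not the SpreizerNet code itself: a spatially correlated noise field (a simple value-noise stand-in for Perlin noise) assigns each neuron on a 2D grid a preferred direction, and the centre of its outgoing connections is shifted in that direction. Grid size, shift and all parameter names are illustrative.

```python
import numpy as np

def smooth_noise(nrow, ncol, scale=8, seed=0):
    """Stand-in for Perlin noise: random values on a coarse grid, bilinearly
    interpolated to the full grid, giving spatially correlated noise."""
    rng = np.random.default_rng(seed)
    coarse = rng.uniform(-1, 1, size=(nrow // scale + 2, ncol // scale + 2))
    y, x = np.mgrid[0:nrow, 0:ncol] / scale
    y0, x0 = y.astype(int), x.astype(int)
    fy, fx = y - y0, x - x0
    top = coarse[y0, x0] * (1 - fx) + coarse[y0, x0 + 1] * fx
    bot = coarse[y0 + 1, x0] * (1 - fx) + coarse[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

def anisotropic_targets(nrow=40, ncol=40, k=20, shift=2, seed=0):
    """For each neuron on a 2D grid, draw k local targets whose centre is shifted
    in a direction set by the noise value at that neuron; nearby neurons share
    similar noise values, hence similar preferred directions (correlated anisotropy)."""
    rng = np.random.default_rng(seed)
    angle = np.pi * smooth_noise(nrow, ncol, seed=seed)       # noise -> preferred direction
    targets = {}
    for i in range(nrow):
        for j in range(ncol):
            di = shift * np.sin(angle[i, j]) + rng.normal(0, 1.5, k)
            dj = shift * np.cos(angle[i, j]) + rng.normal(0, 1.5, k)
            ti = np.mod(np.round(i + di), nrow).astype(int)   # periodic boundaries
            tj = np.mod(np.round(j + dj), ncol).astype(int)
            targets[(i, j)] = list(zip(ti, tj))
    return targets

conn = anisotropic_targets()
print(conn[(0, 0)][:5])
```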
53

Computational Principles of Neural Processing: modulating neural systems through temporally structured stimuli

Castellano, Marta 11 December 2014 (has links)
In order to understand how the neural system encodes and processes information, research has focused on the study of neural representations of simple stimuli, paying no particular attention to their temporal structure, under the assumption that a deeper understanding of how the neural system processes simplified stimuli will lead to an understanding of how the brain functions as a whole [1]. However, time is intrinsically bound to neural processing, as all sensory, motor, and cognitive processes are inherently dynamic. Despite the importance of neural and stimulus dynamics, little is known about how the neural system represents rich spatio-temporal stimuli, which ultimately link the neural system to a continuously changing environment. The purpose of this thesis is to understand whether and how temporally structured neural activity modulates the processing of information within the brain, proposing in turn that the precise interaction between the spatio-temporal structure of the stimulus and the neural system is particularly relevant when considering the ongoing plasticity mechanisms that allow the neural system to learn from experience. In order to answer these questions, three studies were conducted. First, we studied the impact of spiking temporal structure on a single neuron's spiking response, and explored in which way the functional connections to pre-synaptic neurons are modulated through adaptation. Our results suggest that, in a generic spiking neuron, the temporal structure of pre-synaptic excitatory and inhibitory activity modulates both the spiking response of that neuron and, most importantly, the speed and strength of learning. In the second study, we present a generic model of a spiking neural network that processes rich spatio-temporal stimuli, and explore whether the processing of the stimulus within the network is modulated by the interaction with an external dynamical system (i.e. the extracellular medium), as well as by several plasticity mechanisms. Our results indicate that the memory capacity, which reflects a dynamic short-term memory of incoming stimuli, can be extended in the presence of plasticity and through the interaction with an external dynamical system, while maintaining the network dynamics in a regime suitable for information processing. Finally, we characterized cortical signals of human subjects (electroencephalography, EEG) associated with a visual categorization task. Among other aspects, we studied whether changes in the dynamics of the stimulus lead to changes in neural processing at the cortical level, and introduced the relevance of large-scale integration for cognitive processing. Our results suggest that the dynamic synchronization across distributed cortical areas is stimulus specific and specifically linked to perceptual grouping. Taken together, the results presented here suggest that the temporal structure of the stimulus modulates how the neural system encodes and processes information within single neurons, networks of neurons, and cortical areas. In particular, the results indicate that timing modulates single-neuron connectivity structures, the memory capability of networks of neurons, and the cortical representation of a visual stimulus. While the learning of invariant representations remains the best framework to account for a number of neural processes (e.g. long-term memory [2]), the reported studies provide support for the idea that, at least to some extent, the neural system functions in a non-stationary fashion, where the processing of information is modulated by the stimulus dynamics itself. Altogether, this thesis highlights the relevance of understanding adaptive processes and their interaction with the temporal structure of the stimulus, arguing that a further understanding of how the neural system processes dynamic stimuli is crucial for the further understanding of neural processing itself, and that any theory that aims to understand neural processing should consider the processing of dynamic signals. 1. Frankish, K., and Ramsey, W. The Cambridge Handbook of Cognitive Science. Cambridge University Press, 2012. // 2. McGaugh, J. L. Memory - a Century of Consolidation. Science 287, 5451 (Jan. 2000), 248-251.
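The "memory capacity" referred to in the second study is commonly quantified, in the reservoir-computing literature, as the summed squared correlation between delayed copies of the input and linear readouts trained on the network state. A minimal sketch under that definition, using a rate-based echo-state reservoir as a stand-in for the spiking network studied in the thesis (all sizes and parameters are illustrative):

```python
import numpy as np

def memory_capacity(n_units=100, t_len=2000, max_delay=30, seed=0):
    """Memory-capacity estimate for a random rate-based reservoir: drive it with
    white noise, train a linear readout to recover the input delayed by k steps,
    and sum the squared correlations over delays."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0, 1, (n_units, n_units))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # spectral radius < 1
    w_in = rng.uniform(-1, 1, n_units)
    u = rng.uniform(-1, 1, t_len)
    x = np.zeros((t_len, n_units))
    for t in range(1, t_len):
        x[t] = np.tanh(W @ x[t - 1] + w_in * u[t])
    washout = 100
    mc = 0.0
    for k in range(1, max_delay + 1):
        X, y = x[washout + k:], u[washout:t_len - k]       # states vs. delayed input
        w = np.linalg.lstsq(X, y, rcond=None)[0]           # linear readout
        r = np.corrcoef(X @ w, y)[0, 1]
        mc += r ** 2
    return mc

print(f"memory capacity ~ {memory_capacity():.2f}")
```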
54

NEURO-EVOLUTIONARY MODELS OF SPIKING NEURAL NETWORKS APPLIED TO THE PRE-DIAGNOSIS OF VOCAL AGING

MARCO AURELIO BOTELHO DA SILVA 09 October 2015 (has links)
The aging of the voice, known as presbyphonia, is a natural process that can cause great change in the vocal quality of the individual. Its early identification can bring benefits by allowing treatments that may prevent its progression. This work is motivated by the identification of voices with signs of aging using spiking neural networks (SNNs). The main objective is to build two new hybrid models that use SNNs for clustering problems, in which the input attributes and the parameters that configure the SNN are optimized by evolutionary algorithms. More specifically, the proposed neuro-evolutionary models are used to properly configure the SNN and to select the attributes most relevant to the formation of the clusters. The evolutionary algorithms used were the Quantum-Inspired Evolutionary Algorithm with Binary-Real representation (AEIQ-BR) and Optimization by Genetic Programming (OGP). The resulting models were named Quantum-Inspired Evolution of Spiking Neural Networks with Binary-Real representation (QbrSNN) and Spiking Neural Network Optimization by Genetic Programming (SNN-OGP). Eight benchmark datasets and two voice datasets, one male and one female, were used in order to characterize aging. For a functional analysis of the SNN, the benchmark datasets were also tested with a classical clustering approach (k-means) and with an SNN without evolution. The proposed models were compared with a classical Genetic Algorithm (GA) approach. The results showed the feasibility of using SNNs for clustering aged voices.
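A generic sketch of the kind of hybrid neuro-evolutionary loop described above, with the SNN clustering step replaced by a placeholder fitness function (the genome layout, operators and rates are illustrative, not those of QbrSNN or SNN-OGP):

```python
import numpy as np

def evolve(fitness, n_features, n_params, pop=30, gens=40, seed=0):
    """Generic evolutionary loop in the spirit of the hybrid models above:
    each individual carries a binary mask (which input attributes to use) and a
    real-valued vector (SNN configuration parameters); `fitness` scores a
    (mask, params) pair, e.g. by the quality of the clusters an SNN produces."""
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, (pop, n_features))
    params = rng.uniform(0, 1, (pop, n_params))
    for _ in range(gens):
        scores = np.array([fitness(m, p) for m, p in zip(masks, params)])
        order = np.argsort(scores)[::-1]                     # maximize fitness
        masks, params = masks[order], params[order]
        for i in range(pop // 2, pop):                       # refill the worst half
            a, b = rng.integers(0, pop // 2, 2)              # parents from the best half
            cut = rng.integers(1, n_features)
            masks[i] = np.concatenate([masks[a][:cut], masks[b][cut:]])
            params[i] = (params[a] + params[b]) / 2 + rng.normal(0, 0.05, n_params)
            flip = rng.random(n_features) < 0.05             # bit-flip mutation
            masks[i][flip] ^= 1
    return masks[0], params[0]

# Toy fitness standing in for "SNN clustering quality": prefers few selected
# features and parameters near 0.5 (purely illustrative).
best_mask, best_params = evolve(
    lambda m, p: -m.sum() - np.abs(p - 0.5).sum(), n_features=12, n_params=4)
print(best_mask, np.round(best_params, 2))
```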
55

Models of EEG data mining and classification in temporal lobe epilepsy: wavelet-chaos-neural network methodology and spiking neural networks

Ghosh Dastidar, Samanwoy 22 June 2007 (has links)
No description available.
56

Hardware implementation of high-reliability systems based on stochastic methodologies

Canals Guinand, Vicente José 27 July 2012 (has links)
Today's society demands applications of high computational complexity that must be executed in an energy-efficient way, which forces the semiconductor industry to maintain the progression of CMOS technology. However, experts predict that the end of the era of CMOS technology progression is approaching: around 2020, CMOS technology is expected to reach the point known as the "Red Brick Wall", at which its physical, technological and economic limitations become unavoidable. All of this has led public and private institutions, over the last decade, to invest in the development of alternative technological solutions such as nanotechnology (nanotubes, nanowires, graphene-based technologies, etc.). In this thesis we propose an alternative approach to some of these computationally demanding problems that uses current CMOS technology but replaces the classical Von Neumann model of computation with unconventional computing schemes. This is the case of computing based on pulsed logic and, in particular, of stochastic computing, which provides a significant increase in the parallelism and reliability of digital systems. This thesis presents the development and evaluation of a set of stochastic computing blocks implemented with classical digital elements, and proposes several computationally efficient methodologies built on these blocks that allow some massive computing problems to be tackled much more efficiently, especially in fields related to pattern recognition, on which the main part of the research work of this thesis is focused.
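For context on the stochastic computing blocks mentioned above, a textbook illustration (not taken from the thesis): in unipolar stochastic computing a value in [0, 1] is encoded as the probability of a 1 in a bitstream, so a single AND gate multiplies two independent streams, and longer streams trade latency for precision.

```python
import numpy as np

def to_stream(p, n_bits=4096, rng=None):
    """Unipolar stochastic encoding: a value p in [0, 1] becomes a bitstream in
    which each bit is 1 with probability p."""
    rng = rng or np.random.default_rng()
    return (rng.random(n_bits) < p).astype(np.uint8)

rng = np.random.default_rng(1)
a, b = 0.3, 0.6
sa, sb = to_stream(a, rng=rng), to_stream(b, rng=rng)

product = np.mean(sa & sb)          # a single AND gate multiplies the two values
print(f"stochastic a*b ~ {product:.3f} (exact {a * b:.3f})")
```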
57

Silicon neural networks: implementation of cortical cells to improve the artificial-biological hybrid technique

Grassia, Filippo 07 January 2013 (has links) (PDF)
This work has been supported by the European FACETS-ITN project. Within the framework of this project, we contribute to the simulation of cortical cell types (employing experimental electrophysiological data of these cells as references), using a specific VLSI neural circuit to simulate, at the single-cell level, the models studied as references in the FACETS project. The real-time intrinsic properties of the neuromorphic circuits, which precisely compute conductance-based neuron models, will allow a systematic and detailed exploration of the models, while the physical and analog aspect of the simulations, as opposed to software simulation, will provide inputs for the development of the neural hardware at the network level. The second goal of this thesis is to contribute to the design of a mixed hardware-software platform (PAX), specifically designed to simulate spiking neural networks. The tasks performed during this thesis project included: 1) the methods used to obtain the appropriate parameter sets of the cortical neuron models that can be implemented in our analog neuromimetic chip (the parameter extraction step was validated using a bifurcation analysis showing that the simplified HH model implemented in our silicon neuron shares the dynamics of the HH model); 2) a fully customizable fitting method, in voltage-clamp mode, to tune our neuromimetic integrated circuits using a metaheuristic algorithm; 3) the contribution to the development of the PAX system in terms of software tools and a VHDL driver interface for neuron configuration in the platform. Finally, it also addresses the issue of synaptic tuning for future SNN simulation.
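A toy sketch of voltage-clamp parameter fitting in the spirit described above, using a simple random-perturbation search on a Boltzmann-activated current; the thesis works with full conductance-based models and its own metaheuristic, so every function, parameter and value here is illustrative:

```python
import numpy as np

def ik_steady(v, gmax, v_half, k, e_k=-90.0):
    """Steady-state potassium-like current in voltage clamp:
    I = gmax * n_inf(V) * (V - E_K), with a Boltzmann activation curve."""
    n_inf = 1.0 / (1.0 + np.exp(-(v - v_half) / k))
    return gmax * n_inf * (v - e_k)

def fit_random_search(v, i_ref, n_iter=5000, seed=0):
    """Toy metaheuristic (random perturbation search) that tunes (gmax, v_half, k)
    so the model current matches a reference voltage-clamp trace."""
    rng = np.random.default_rng(seed)
    theta = np.array([1.0, -20.0, 15.0])            # initial guess
    best_err = np.mean((ik_steady(v, *theta) - i_ref) ** 2)
    for _ in range(n_iter):
        cand = theta + rng.normal(0, [0.05, 1.0, 0.5])
        err = np.mean((ik_steady(v, *cand) - i_ref) ** 2)
        if err < best_err:                          # keep only improving moves
            theta, best_err = cand, err
    return theta, best_err

v = np.linspace(-80, 40, 25)                        # voltage-clamp command steps (mV)
true = (5.0, -30.0, 9.0)                            # "measured" cell, for the sketch only
i_ref = ik_steady(v, *true) + np.random.default_rng(1).normal(0, 5, v.size)
theta, err = fit_random_search(v, i_ref)
print("fitted (gmax, V_half, k):", np.round(theta, 2), "mse:", round(err, 2))
```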
58

Emerging Resistive Memory Technology for Neuromorphic Systems and Applications

Suri, Manan 18 September 2013 (has links)
Research in the field of neuromorphic and cognitive computing has generated a lot of interest in recent years. With potential applications in fields such as large-scale data-driven computing, robotics and intelligent autonomous systems, to name a few, bio-inspired computing paradigms are being investigated as the next-generation (post-Moore, non-Von Neumann) ultra-low-power computing solutions. In this work we discuss the role that different emerging non-volatile resistive memory technologies (RRAM), specifically (i) Phase Change Memory (PCM), (ii) Conductive-Bridge Memory (CBRAM) and (iii) Metal-Oxide based Memory (OXRAM), can play in dedicated neuromorphic hardware. We focus on the emulation of synaptic plasticity effects such as long-term potentiation (LTP), long-term depression (LTD) and spike-timing dependent plasticity (STDP) with RRAM synapses. We developed novel low-power architectures, programming methodologies, and simplified STDP-like learning rules, optimized specifically for some RRAM technologies. We show the implementation of large-scale, energy-efficient neuromorphic systems with two different approaches: (i) deterministic multi-level synapses and (ii) stochastic binary synapses. Prototype applications such as complex visual and auditory pattern extraction are also shown using feed-forward spiking neural networks (SNN). We also introduce a novel methodology to design compact, area-efficient stochastic neurons that exploit intrinsic physical effects of CBRAM devices.
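A sketch of a simplified STDP-like rule for a stochastic binary synapse, in the spirit of approach (ii) above: the device stores one bit, and pre/post spike pairings trigger probabilistic SET or RESET events. The probabilities, timing window and spike times are illustrative, not device-calibrated values from the thesis.

```python
import numpy as np

def stochastic_binary_stdp(pre_spikes, post_spikes, p_set=0.1, p_reset=0.05,
                           window=20.0, seed=0):
    """Simplified STDP-like rule for a stochastic binary RRAM synapse: the device
    holds a single bit (ON/OFF); a pre-before-post pairing inside the timing
    window triggers a probabilistic SET, a post-before-pre pairing a
    probabilistic RESET."""
    rng = np.random.default_rng(seed)
    state = 0                                   # start in the OFF (low-conductance) state
    history = []
    for t_post in post_spikes:
        dt = t_post - pre_spikes                # pairing intervals with all pre spikes
        if np.any((dt > 0) & (dt < window)):    # causal pairing -> potentiation attempt
            if rng.random() < p_set:
                state = 1
        elif np.any((dt < 0) & (dt > -window)): # anti-causal pairing -> depression attempt
            if rng.random() < p_reset:
                state = 0
        history.append(state)
    return history

pre = np.array([5.0, 40.0, 80.0])               # toy spike times in ms
post = np.array([12.0, 35.0, 95.0])
print(stochastic_binary_stdp(pre, post))
```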
59

Applications of the Fokker-Planck Equation in Computational and Cognitive Neuroscience

Vellmer, Sebastian 20 July 2020 (has links)
This thesis is concerned with the calculation of statistics, in particular the power spectra, of point processes generated by stochastic multidimensional integrate-and-fire (IF) neurons, networks of IF neurons, and decision-making models, from the corresponding Fokker-Planck equations. In the brain, information is encoded by sequences of action potentials. In studies that focus on spike timing, IF neurons that drastically simplify the spike generation have become the standard model. One-dimensional IF neurons do not suffice to accurately model neural dynamics; the extension towards multiple dimensions yields realistic behavior at the price of growing complexity. The first part of this work develops a theory of spike-train power spectra for stochastic, multidimensional IF neurons. From the Fokker-Planck equation, a set of partial differential equations is derived that describes the stationary probability density, the firing rate and the spike-train power spectrum. In the second part of this work, a mean-field theory of large and sparsely connected homogeneous networks of spiking neurons is developed that takes into account the self-consistent temporal correlations of spike trains. Neural input is approximated by colored Gaussian noise generated by a multidimensional Ornstein-Uhlenbeck process whose coefficients are initially unknown, are determined by the self-consistency condition, and define the solution of the theory. To explore heterogeneous networks, an iterative scheme is extended to determine the distribution of spectra. In the third part, the Fokker-Planck equation is applied to calculate the statistics of sequences of binary decisions from diffusion-decision models (DDM). For the analytically tractable DDM, the statistics are calculated from the corresponding Fokker-Planck equation. To determine the statistics for nonlinear models, the threshold-integration method is generalized.
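For readers unfamiliar with the starting point, the one-dimensional case (a leaky integrate-and-fire neuron driven by white noise) shows the structure of the problem that the thesis generalizes to several dimensions; this is the standard textbook formulation, not copied from the thesis:

```latex
% Leaky integrate-and-fire neuron with mean input \mu, membrane time constant \tau,
% and white noise:  \dot{V} = (\mu - V)/\tau + \sqrt{2D}\,\xi(t).
% Fokker-Planck equation for the membrane-potential density P(V,t):
\begin{align}
  \frac{\partial P(V,t)}{\partial t}
    &= \frac{\partial}{\partial V}\!\left[\frac{V-\mu}{\tau}\,P(V,t)\right]
       + D\,\frac{\partial^{2} P(V,t)}{\partial V^{2}}, \\[4pt]
  P(V_{T},t) &= 0, \qquad
  r(t) = -\,D\,\left.\frac{\partial P(V,t)}{\partial V}\right|_{V=V_{T}},
\end{align}
% with an absorbing boundary at the threshold V_T, the probability flux through the
% threshold giving the instantaneous firing rate r(t), and that flux re-injected at
% the reset potential V_R after each spike.
```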
60

Analysing the Energy Efficiency of Training Spiking Neural Networks

Liu, Richard, Bixo, Fredrik January 2022 (has links)
Neural networks have become increasingly adopted in society over the last few years. As neural networks consume a lot of energy to train, reducing the energy consumption of these networks is desirable from an environmental perspective. Spiking neural networks are a type of neural network, inspired by the human brain, that is significantly more energy efficient than traditional neural networks. However, there is little research on how the hyperparameters of these networks affect the relationship between accuracy and energy. The aim of this report is therefore to analyse this relationship. To do this, we measure the energy usage of training several different spiking network models. The results of this study show that the choice of hyperparameters in a neural network does affect the efficiency of the network. While the correlation between any individual factor and energy consumption is inconclusive, this work can be used as a springboard for further research in this area.
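A minimal sketch of one way such an energy measurement can be taken on a Linux machine that exposes Intel RAPL counters through the powercap interface (the sysfs path and read permissions vary between machines, and the training call is a placeholder):

```python
from pathlib import Path
import time

RAPL = Path("/sys/class/powercap/intel-rapl:0")   # CPU package 0; may differ per machine

def read_energy_uj():
    """Cumulative package energy in microjoules from the Linux powercap/RAPL interface."""
    return int((RAPL / "energy_uj").read_text())

def measure(train_fn):
    """Measure wall-clock time and CPU-package energy spent inside `train_fn`,
    correcting for counter wrap-around."""
    max_range = int((RAPL / "max_energy_range_uj").read_text())
    e0, t0 = read_energy_uj(), time.time()
    train_fn()
    e1, t1 = read_energy_uj(), time.time()
    delta = (e1 - e0) % max_range                  # handle counter wrap-around
    return delta / 1e6, t1 - t0                    # joules, seconds

if __name__ == "__main__":
    # Stand-in for one training run of an SNN model; replace with the real call.
    joules, seconds = measure(lambda: sum(i * i for i in range(10_000_000)))
    print(f"{joules:.1f} J over {seconds:.1f} s  (~{joules / seconds:.1f} W)")
```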
