101 |
New strategies of acquisition and processing of encephalographic biopotentials
Nonclercq, Antoine, 04 June 2007 (has links) (PDF)
Electroencephalography (EEG) is a medical diagnostic technique. It consists of measuring the biopotentials produced by the upper layers of the brain at various standardized locations on the skull.

Since the biopotentials produced by the upper parts of the brain have an amplitude on the order of one microvolt, the measurements performed by an EEG are exposed to many interference risks.

Moreover, since the current tendency is to measure these signals over periods of several hours, or even several days, human analysis of the recording becomes extremely long and difficult. The use of signal analysis techniques to help detect paroxysms of clinical interest within the electroencephalogram has therefore become almost essential. However, the performance of many automatic detection algorithms is significantly degraded by the presence of interference: the quality of the recordings is therefore fundamental.

This thesis explores the benefits that electronics and signal processing can bring to electroencephalography, aiming to improve signal quality and to semi-automate data processing.

These two aspects are interdependent, because the performance of any semi-automation of the data processing depends on the quality of the acquired signal. Special attention is paid to the interaction between these two goals and to attaining the optimal hardware/software pair.

This thesis offers an overview of the medical electroencephalographic acquisition chain and of its possible improvements.

The conclusions of this work may be extended to other cases of biological signal amplification such as the electrocardiogram (ECG) and the electromyogram (EMG). Such a generalization would even be easier, because those signals have a larger amplitude and are therefore more resistant to interference. / Doctorat en sciences appliquées / info:eu-repo/semantics/nonPublished
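As an illustration of the kind of semi-automated paroxysm detection that this work motivates, the short sketch below flags large transients in a single EEG channel with a simple amplitude threshold; the channel, sampling rate, threshold rule and all numbers are illustrative assumptions, not the algorithm developed in the thesis.

import numpy as np

# Illustrative detector only; parameters and threshold rule are assumed.
def detect_paroxysms(eeg, fs, z_thresh=5.0, min_separation_s=0.2):
    """Flag candidate paroxysmal events in one EEG channel.

    eeg: 1-D array of samples (microvolts); fs: sampling rate in Hz.
    A sample is a candidate when its absolute z-score over the whole
    recording exceeds z_thresh; candidates closer than min_separation_s
    are merged into a single event.
    """
    z = (eeg - np.mean(eeg)) / np.std(eeg)
    candidates = np.flatnonzero(np.abs(z) > z_thresh)
    events, min_gap = [], int(min_separation_s * fs)
    for idx in candidates:
        if not events or idx - events[-1] > min_gap:
            events.append(idx)
    return np.array(events) / fs  # event times in seconds

# Synthetic check: background noise plus two injected transients.
fs = 256
eeg = 10.0 * np.random.randn(10 * fs)   # ~10 uV background noise
eeg[[2 * fs, 7 * fs]] += 150.0          # two large transients
print(detect_paroxysms(eeg, fs))        # approximately [2.0, 7.0]

Even such a crude detector illustrates why recording quality matters: a few large interference artifacts would be flagged just as readily as genuine paroxysms.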
|
102 |
Electrophysiological Signatures of Active Vision
Carl, Christine, 29 April 2014 (has links)
Active movements are a key feature of human behavior. Even when we do not move our limbs, we almost never stop guiding our eyes. As a minimal but omnipresent form of behavior, fast eye movements, called saccades, sample the visual world and determine to a large extent what we perceive. Despite saccades being an integral part of visual perception, prevalent research practice treats the human subject as a passive observer who fixates a spot on the screen and is not allowed to move. Yet, learning sensorimotor interactions by active exploration in order to predict future changes and guide actions seems to be a fundamental principle of neural organization. As a result, the neural patterns of active behavior can be fundamentally different from the neural processes revealed in movement-restricted laboratory settings, questioning the transferability of results from experimental paradigms demanding fixation to real-world free-viewing behavior. In this thesis, we aim to study the neural mechanisms underlying free-viewing behavior. In order to assess the fast, flexible and possibly distributed neural dynamics of active vision, we established a procedure for studying eye movements in magnetoencephalography (MEG) and investigated oscillatory signatures associated with sensorimotor processes of eye movements and saccade target selection, two fundamental processes of active vision.
Electroencephalography (EEG) and MEG can non-invasively measure fast neural dynamics and hence seem ideally suited for studying active vision in humans. However, artifacts related to eye movements confound both EEG and MEG signals, and a thorough handling of these artifacts is crucial for investigating neural activity during active movements. So far, cleaning of ocular artifacts has mostly been performed for occasional eye movements and blinks in EEG fixation paradigms. Less is known about the impact of ocular artifacts, and especially of the saccadic spike, on MEG. As a first step toward enabling active vision studies in MEG, we investigated ocular artifacts and possible ways of separating them from neural signals in MEG. We show that the saccadic spike seriously distorts the spatial and spectral characteristics of the MEG signal (Study 2). We further tested whether electrooculogram (EOG) based regression is feasible for corneo-retinal artifact removal (Study 1). Addressing an often-raised concern, we also examined whether EOG regression eliminates neural activity when applied to MEG. Our results do not indicate such a susceptibility, and we conclude that EOG regression is suitable for removing the corneo-retinal artifact in MEG. Based on insights from both studies, we established an artifact handling procedure, including EOG regression and independent component analysis (ICA), to assess the neural dynamics of active vision.
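To make the EOG regression step concrete, here is a minimal sketch that regresses simultaneously recorded EOG channels out of a multichannel recording by ordinary least squares; the array shapes, the toy data and the use of plain least squares are illustrative assumptions rather than the exact pipeline of Studies 1 and 2.

import numpy as np

# Minimal illustrative regression; not the exact procedure of the studies.
def eog_regression(data, eog):
    """Subtract the EOG-predicted part from every data channel.

    data: (n_channels, n_samples) MEG/EEG recording (zero-mean per channel).
    eog:  (n_eog, n_samples) EOG reference channels (zero-mean).
    Each data channel is modeled as a linear combination of the EOG
    channels plus a residual; the residual is returned as the cleaned data.
    """
    beta, *_ = np.linalg.lstsq(eog.T, data.T, rcond=None)  # (n_eog, n_channels)
    return data - beta.T @ eog

# Toy example: 10 channels contaminated by 2 simulated EOG traces.
rng = np.random.default_rng(0)
eog = rng.standard_normal((2, 5000))
leakage = rng.standard_normal((10, 2))                  # per-channel artifact gains
data = rng.standard_normal((10, 5000)) + leakage @ eog
cleaned = eog_regression(data, eog)
print(abs(np.corrcoef(cleaned[0], eog[0])[0, 1]))       # close to zero after cleaning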
In Study 3, we investigated spectral signatures of neuronal activity across the cortex underlying saccade preparation, execution and re-fixation in a delayed saccade task. During preparation and execution, we found a dichotomous signature of gamma power increases and beta power decreases in widespread cortical areas related to saccadic control, including fronto-parietal structures. Saccade-direction-specific signatures resided in hemisphere-lateralized changes in low gamma and alpha power in posterior parietal cortex during preparation, extending to extrastriate areas during re-fixation.
Real-world behavior implies the constant need to flexibly select actions among competing behavioral alternatives depending on both sensory input and internal states. In order to assess internally motivated viewing behavior, we compared neuronal activity for externally cued saccades with that for saccades to freely chosen, equally valuable targets. We found gamma-band-specific power increases in fronto-parietal areas that likely reflect a fast, transient process of action guidance for sensory-guided saccades and a sustained process of internal selection between competing behavioral alternatives. The sustained signature of internal action selection suggests that a decision between spatially oriented movements is mediated within sensorimotor structures by neural competition between assemblies encoding movement plans that evolve in parallel. Since our observations support the assumption that a decision emerges through the distributed consensus of neural activities within effector-specific areas rather than in a distinct decision module, they argue for the importance of studying mental processes within their ecologically valid and active context.
This thesis shows the feasibility of studying neural mechanisms of active vision in MEG and provides important steps for studying neurophysiological correlates of free viewing in the future. The observed spectrally specific, distributed signatures highlight the importance of assessing fast oscillatory dynamics across the cortex for understanding neural mechanisms mediating real-world active behavior.
|
103 |
Self-organized Criticality in Neural Networks by Inhibitory and Excitatory Synaptic Plasticity
Ehsani, Masud, 25 January 2022 (has links)
Neural networks show intrinsic ongoing activity even in the absence of information processing and task-driven activity. This spontaneous activity has been reported to have specific characteristics, ranging from scale-free avalanches in microcircuits to the power-law decay of the power spectrum of oscillations in coarse-grained recordings of large populations of neurons. The emergence of scale-free activity and power-law distributions of observables has encouraged researchers to postulate that the neural system operates near a continuous phase transition. At such a phase transition, changes in the control parameters or in the strength of the external input lead to a change in the macroscopic behavior of the system. On the other hand, at a critical point, critical slowing down makes a phenomenological mesoscopic description of the system feasible. Two distinct types of phase transition have been suggested as the operating point of the neural system, namely active-inactive and synchronous-asynchronous phase transitions.
In contrast to normal phase transitions, in which fine-tuning of the control parameter(s) is required to bring the system to the critical point, neural systems should be supplemented with self-tuning mechanisms that adaptively keep the system near the critical point (or critical region) in phase space.
In this work, we introduce a self-organized critical model of a neural network. We consider the dynamics of excitatory and inhibitory (EI) sparsely connected populations of spiking leaky integrate-and-fire neurons with conductance-based synapses. Ignoring inhomogeneities and internal fluctuations, we first analyze the mean-field model. We choose the strength of the external excitatory input and the average strength of excitatory-to-excitatory synapses as control parameters of the model and analyze the bifurcation diagram of the mean-field equations. We focus on bifurcations in the low-firing-rate regime, in which the quiescent state loses stability through saddle-node or Hopf bifurcations. In particular, at the Bogdanov-Takens (BT) bifurcation point, which is the intersection of the Hopf and saddle-node bifurcation lines of the 2D dynamical system, the network shows avalanche dynamics with power-law avalanche size and duration distributions. This matches the characteristics of low-firing-rate spontaneous activity in the cortex. By linearizing the gain functions and the excitatory and inhibitory nullclines, we can approximate the location of the BT bifurcation point. This point in the control parameter space corresponds to an internal balance of excitation and inhibition and a slight excess of external excitatory input to the excitatory population. Due to the tight balance of the average excitation and inhibition currents, the firing of individual cells is fluctuation-driven. Around the BT point, the spiking of neurons is a Poisson process and the population-average membrane potential of the neurons lies approximately at the middle of the operating interval $[V_{Rest}, V_{th}]$. Moreover, the EI network is close to both the oscillatory and the active-inactive phase transition regimes.
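As a rough illustration of the mean-field analysis sketched in this paragraph, the code below solves a generic two-population excitatory/inhibitory rate model for its low-rate fixed point and scans the external drive, reporting the leading eigenvalue of the linearization; a sign change marks the loss of stability of the quiescent state. The sigmoidal gains, coupling values and time constants are illustrative assumptions, not the conductance-based equations of the thesis.

import numpy as np
from scipy.optimize import fsolve

def f(x):
    """Sigmoidal population gain (illustrative choice)."""
    return 1.0 / (1.0 + np.exp(-x))

def df(x):
    """Derivative of the sigmoid."""
    s = f(x)
    return s * (1.0 - s)

# Illustrative coupling strengths and time constants (not the thesis values).
P = dict(w_ei=2.0, w_ie=2.5, w_ii=1.5, tau_e=0.010, tau_i=0.005)

def low_rate_fixed_point(ext_e, w_ee):
    """Solve the mean-field equations, starting near the quiescent state."""
    def rhs(r):
        r_e, r_i = r
        return [-r_e + f(w_ee * r_e - P["w_ei"] * r_i + ext_e),
                -r_i + f(P["w_ie"] * r_e - P["w_ii"] * r_i)]
    return fsolve(rhs, [0.01, 0.01])

def leading_eigenvalue(r_e, r_i, ext_e, w_ee):
    """Largest real part of the Jacobian eigenvalues at the fixed point."""
    ge = df(w_ee * r_e - P["w_ei"] * r_i + ext_e)
    gi = df(P["w_ie"] * r_e - P["w_ii"] * r_i)
    J = np.array([[(-1 + w_ee * ge) / P["tau_e"], -P["w_ei"] * ge / P["tau_e"]],
                  [P["w_ie"] * gi / P["tau_i"], (-1 - P["w_ii"] * gi) / P["tau_i"]]])
    return np.linalg.eigvals(J).real.max()

# Scan the external excitatory drive for a fixed recurrent excitation w_ee:
for ext_e in np.linspace(-6.0, 0.0, 7):
    r_e, r_i = low_rate_fixed_point(ext_e, w_ee=6.0)
    lam = leading_eigenvalue(r_e, r_i, ext_e, w_ee=6.0)
    print(f"ext_e={ext_e:+.1f}  r_e={r_e:.3f}  max Re(lambda)={lam:+.1f}")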
Next, we consider the self-tuning of the system to this critical point. The self-organizing parameter in our network is the balance between the opposing forces of the inhibitory and excitatory populations' activities, and the self-organizing mechanisms are long-term synaptic plasticity and short-term depression of the synapses. The former tunes the overall strengths of the excitatory and inhibitory pathways close to a balanced regime of these currents; the latter, which is based on the finite amount of resources in brain areas, acts as an adaptive mechanism that tunes micro-populations of neurons subject to fluctuating external inputs so that they attain the balance over a wider range of external input strengths.
Using the Poisson firing assumption, we propose a microscopic Markovian model which captures the internal fluctuations in the network due to its finite size and which matches the macroscopic mean-field equations under coarse-graining. Near the critical point, a phenomenological mesoscopic model for the excitatory and inhibitory fields of activity becomes possible due to the time-scale separation between slowly changing variables and fast degrees of freedom. We show that the mesoscopic model corresponding to the neural field model near the local Bogdanov-Takens bifurcation point matches the Langevin description of the directed percolation process (a minimal form of this equation is written out after the chapter outline below). Tuning the system to the critical point can be achieved by coupling fast population dynamics with slow adaptive gain and synaptic weight dynamics, which make the system wander around the phase transition point. Therefore, by introducing short-term and long-term synaptic plasticity, we propose a self-organized critical stochastic neural field model.
1. Introduction
1.1. Scale-free Spontaneous Activity
1.1.1. Nested Oscillations in the Macro-scale Collective Activity
1.1.2. Up and Down States Transitions
1.1.3. Avalanches in Local Neuronal Populations
1.2. Criticality and Self-organized Criticality in Systems out of Equilibrium
1.2.1. Sandpile Models
1.2.2. Directed Percolation
1.3. Critical Neural Models
1.3.1. Self-Organizing Neural Automata
1.3.2. Criticality in the Mesoscopic Models of Cortical Activity
1.4. Balance of Inhibition and Excitation
1.5. Functional Benefits of Being in the Critical State
1.6. Arguments Against the Critical State of the Brain
1.7. Organization of the Current Work
2. Single Neuron Model
2.1. Impulse Response of the Neuron
2.2. Response of the Neuron to the Constant Input
2.3. Response of the Neuron to the Poisson Input
2.3.1. Potential Distribution of a Neuron Receiving Poisson Input
2.3.2. Firing Rate and Interspike intervals’ CV Near the Threshold
2.3.3. Linear Poisson Neuron Approximation
3. Interconnected Homogeneous Population of Excitatory and Inhibitory Neurons
3.1. Linearized Nullclines and Different Dynamic Regimes
3.2. Logistic Function Approximation of Gain Functions
3.3. Dynamics Near the BT Bifurcation Point
3.4. Avalanches in the Region Close to the BT Point
3.5. Stability Analysis of the Fixed Points in the Linear Regime
3.6. Characteristics of Avalanches
4. Long-Term and Short-Term Synaptic Plasticity Rules Tune the EI Population Close to the BT Bifurcation Point
4.1. Long-Term Synaptic Plasticity by STDP Tunes Synaptic Weights Close to the Balanced State
4.2. Short-Term Plasticity and Up-Down States Transitions
5. Interconnected Network of EI Populations: Wilson-Cowan Neural Field Model
6. Stochastic Neural Field
6.1. Finite Size Fluctuations in a Single EI Population
6.2. Stochastic Neural Field with a Tuning Mechanism to the Critical State
7. Conclusion
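For reference, the Langevin description of directed percolation mentioned in the abstract above has, in its standard Reggeon-field-theory form (the notation here is the generic one, not taken from the thesis), the shape

$\partial_t \rho(x,t) = a\,\rho(x,t) - b\,\rho(x,t)^{2} + D\,\nabla^{2}\rho(x,t) + \sqrt{\gamma\,\rho(x,t)}\,\xi(x,t)$, with $\langle \xi(x,t)\,\xi(x',t')\rangle = \delta(x-x')\,\delta(t-t')$,

where $\rho$ is the coarse-grained activity field; the multiplicative noise vanishes together with the activity, which is what ties the absorbing (inactive) state to the critical behavior.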
|
104 |
Seasonal Changes in Body Composition, Block Jump, Attack Jump and Lower Body Power Index in Male Collegiate Volleyball Players
Loomis, Geoffrey W, 01 December 2013 (has links) (PDF)
Jumping ability in volleyball players is crucial to a team's success. There are both muscular and neural components to jumping. Coaches often test jumping ability and body composition prior to the start of the competitive season, but many fail to monitor these important variables during the course of the season. Jumping ability can decrease over the course of the season as the focus moves from strength training in the weight room to skill development on the court. It is imperative that players maintain their jumping ability and body composition over the course of the season. Seasonal changes in elite male collegiate volleyball players were determined by testing the players' body composition, spike jump, block jump and lower body power index at three distinct time points: prior to the first game, during the bye week, and at the end of the regular season. It was found that these players were able to maintain their vertical jump and lower body power index. In addition, those who were classified as players (those who played throughout the course of the season) had lower body fat percentages and higher jump scores. These results will aid coaches in understanding the changes that occur over the course of a season in elite male collegiate volleyball players.
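As a hypothetical illustration of how such seasonal monitoring can be tabulated, the sketch below estimates lower-body peak power from jump height and body mass with the Sayers equation at the three test points; the Sayers formula and the sample numbers are assumptions for illustration only, and the lower body power index used in the thesis may be defined differently.

# Sayers estimate of peak power; the thesis's own index may differ.
def sayers_peak_power(jump_cm, mass_kg):
    """Estimated lower-body peak power in watts (Sayers equation)."""
    return 60.7 * jump_cm + 45.3 * mass_kg - 2055.0

# Hypothetical player measured pre-season, at the bye week and post-season:
tests = {"pre-season": (86.0, 88.0), "bye week": (85.0, 87.5), "post-season": (84.5, 87.0)}
baseline = sayers_peak_power(*tests["pre-season"])
for label, (jump_cm, mass_kg) in tests.items():
    power = sayers_peak_power(jump_cm, mass_kg)
    change = 100.0 * (power - baseline) / baseline
    print(f"{label}: jump {jump_cm} cm, mass {mass_kg} kg, "
          f"peak power {power:.0f} W ({change:+.1f}% vs pre-season)")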
|
105 |
MICROPROCESSOR-COMPATIBLE NEURAL SIGNAL PROCESSING FOR AN IMPLANTABLE NEURODYNAMIC SENSOR
Hsu, Ming-Hsuan, January 2009 (has links)
No description available.
|
106 |
A Battery-Powered Multichannel Microsystem for Activity-Dependent Intracortical Microstimulation
Azin, Meysam, 29 March 2011 (has links)
No description available.
|
107 |
Application and Simulation of Neuromorphic Devices for use in Neural Networks
Wenke, Sam, 28 September 2018 (has links)
No description available.
|
108 |
Spike Processing Circuit Design for Neuromorphic Computing
Zhao, Chenyuan, 13 September 2019 (has links)
The von Neumann bottleneck, which refers to the limited throughput between the CPU and memory, has become the major factor hindering the technical advance of computing systems. In recent years, neuromorphic systems have started to gain increasing attention as compact and energy-efficient computing platforms. Spike-based neuromorphic computing systems require high-performance, low-power neural encoders and decoders to emulate the spiking behavior of neurons. These two spike/analog signal conversion interfaces determine the performance of the whole spiking neuromorphic computing system, and in particular its peak performance. Many state-of-the-art neuromorphic systems typically operate in the frequency range between 10^0 kHz and 10^2 kHz because of the limitation of encoding/decoding speed. In this dissertation, the popular encoding and decoding schemes, i.e. rate encoding, latency encoding and ISI (inter-spike interval) encoding, together with related hardware implementations, are discussed and analyzed. The contributions of this dissertation fall into three main parts: neuron improvement, the design of three kinds of ISI encoder, and the design of two types of ISI decoder. A two-path leakage LIF neuron has been fabricated, and a modular design methodology has been developed. Three ISI encoding schemes, namely parallel signal encoding, full signal iteration encoding and partial signal encoding, are discussed. The first two ISI encoders have been fabricated successfully, and the last one will be taped out by the end of 2019. The two types of ISI decoder adopt different techniques: a sample-and-hold based mixed-signal design and a spike-timing-dependent-plasticity (STDP) based analog design, respectively. Both decoders have been evaluated successfully through post-layout simulations; the STDP-based decoder will be taped out by the end of 2019. A test bench based on correlation inspection has been built to evaluate the information-recovery capability of the proposed spike processing link. / Doctor of Philosophy / Neuromorphic computing refers to electronic systems that mimic the behavior of biological organisms. In most cases, a neuromorphic computing system is built with analog circuits, which offer benefits in power efficiency and low thermal radiation. Among the components of a neuromorphic computing system, one of the most important is the signal processing interface, i.e. the encoder/decoder. To increase the whole system's performance, novel encoders and decoders are proposed in this dissertation: three kinds of temporal encoder, one rate encoder, one latency encoder, one temporal decoder, and one general spike decoder. These designs can be combined to build a highly efficient spike-based data link that sustains the processing performance of the whole neuromorphic computing system.
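To make the ISI coding idea concrete, here is a minimal software sketch of inter-spike-interval encoding and decoding, where an analog sample is mapped to the gap between consecutive spike times and recovered by measuring that gap; the linear mapping and its parameters are illustrative assumptions, not the circuits designed in the dissertation.

import numpy as np

# Illustrative linear ISI code: larger sample values -> shorter intervals (assumed mapping).
T_MIN, T_MAX = 1e-3, 10e-3   # interval bounds in seconds (assumed)

def isi_encode(samples):
    """Map samples in [0, 1] to spike times whose gaps carry the values."""
    intervals = T_MAX - np.clip(samples, 0.0, 1.0) * (T_MAX - T_MIN)
    return np.cumsum(intervals)            # absolute spike times

def isi_decode(spike_times):
    """Recover the samples from the measured inter-spike intervals."""
    intervals = np.diff(spike_times, prepend=0.0)
    return (T_MAX - intervals) / (T_MAX - T_MIN)

x = np.array([0.1, 0.5, 0.9, 0.3])
spikes = isi_encode(x)
print(np.round(isi_decode(spikes), 3))     # -> [0.1 0.5 0.9 0.3]

The dissertation's encoders and decoders realize this kind of mapping in analog hardware rather than in software.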
|
109 |
Caracterización de medidas de regularidad en señales biomédicas. Robustez a outliers [Characterization of regularity measures in biomedical signals: robustness to outliers]
Molina Picó, Antonio, 03 September 2014 (has links)
Physiological systems generate electrical signals during their operation. These signals can be recorded and displayed, and they constitute a fundamental aid to diagnosis in current clinical practice. However, visual inspection does not allow the complete extraction of the information contained in these signals. Among automatic processing techniques, nonlinear methods stand out, specifically those related to estimating the regularity of the underlying signal. In recent years these methods have been yielding very significant results in this field. However, they are very sensitive to interference in the signals, and a significant degradation of their diagnostic capability occurs if the biomedical signals are contaminated. One of the elements that appears with some frequency in physiological recordings, and that contributes to this performance degradation in nonlinear estimators, is short-duration impulses, known in this context as spikes.

This work addresses the problems associated with the presence of spikes in biosignals, characterizing their influence on a set of specific measures so that possible degradation can be anticipated and the appropriate countermeasures applied. Specifically, the regularity measures characterized are: Approximate Entropy (ApEn), Sample Entropy (SampEn), Lempel-Ziv Complexity (LZC) and Detrended Fluctuation Analysis (DFA). All of these methods have given satisfactory results in many previous studies on biomedical signal processing. The characterization is carried out through an exhaustive experimental study in which controlled spikes are applied to different physiological recordings, and the influence of these spikes on the resulting estimates is analyzed quantitatively and qualitatively.

The results show that the level of interference, as well as the parameters of the regularity measures, affect the estimates in very different ways. In general, LZC is the most robust of the characterized measures against spikes, while DFA is the most vulnerable. Nevertheless, the ability to discriminate between classes remains in many cases, despite the changes produced in the absolute entropy values. / Molina Picó, A. (2014). Caracterización de medidas de regularidad en señales biomédicas. Robustez a outliers [Tesis doctoral]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/39346
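As an illustration of the kind of experiment described above, the sketch below computes Sample Entropy on a signal before and after injecting controlled spikes; the parameter choices (m = 2, tolerance 0.2 times the standard deviation), the synthetic signal and the spike amplitudes are assumptions for illustration, not the experimental protocol of the thesis.

import numpy as np

# Illustrative parameters; not the thesis's experimental protocol.
def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r): -ln(A/B) with Chebyshev distance and r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    n = len(x)

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(n - dim)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b = count_matches(m)        # template matches of length m
    a = count_matches(m + 1)    # template matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(1)
signal = np.sin(0.1 * np.arange(3000)) + 0.1 * rng.standard_normal(3000)

contaminated = signal.copy()
spike_idx = rng.choice(len(signal), size=15, replace=False)
contaminated[spike_idx] += 5.0              # controlled spikes (outliers)

print("clean       :", round(sample_entropy(signal), 3))
print("with spikes :", round(sample_entropy(contaminated), 3))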
|
110 |
Analyse des trains de spike à large échelle avec contraintes spatio-temporelles : application aux acquisitions multi-électrodes rétiniennes / Analysis of large scale spiking networks dynamics with spatio-temporal constraints : application to multi-electrodes acquisitions in the retina
Nasser, Hassan, 14 March 2014 (has links)
Recent experimental advances have made it possible to record up to several hundred neurons simultaneously in the cortex or in the retina. Analyzing such data requires mathematical and numerical methods to describe the spatio-temporal correlations in population activity, which can be done with the Maximum Entropy method. Here, a crucial parameter is the product N×R, where N is the number of neurons and R the memory depth of the correlations (how far in the past the spike activity affects the current state). Standard statistical mechanics methods are limited to a spatial correlation structure with R = 1 (e.g. the Ising model), whereas methods based on transfer matrices, which allow the analysis of spatio-temporal correlations, are limited to N×R ≤ 20. In the first part of the thesis, we propose a modified version of the transfer matrix method, based on a parallel Monte Carlo algorithm, that allows us to go up to N×R = 100. In the second part, we present EnaS, a C++ library with a graphical user interface developed for neuroscientists. EnaS offers highly interactive tools that allow users to manage data, compute empirical statistics, fit statistical models and visualize results. Finally, in the third part, we test our method on synthetic and real data sets; the real data are retinal recordings provided by our neuroscientist partners. Our non-exhaustive analysis shows the advantages of considering spatio-temporal correlations for the analysis of retinal spike trains, but it also outlines the limits of Maximum Entropy methods.
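To illustrate what the N×R parameter means in practice, here is a minimal sketch that extracts spatio-temporal spike patterns of N neurons over R time bins from a binarized raster and estimates their empirical probabilities, which are the statistics a maximum-entropy model of memory depth R is fitted to; the toy raster and the pattern encoding are illustrative assumptions, not the EnaS implementation.

import numpy as np
from collections import Counter

# Illustrative pattern statistics; the toy raster and encoding are assumed.
def empirical_pattern_probabilities(raster, R):
    """Empirical probabilities of spatio-temporal patterns of depth R.

    raster: (N, T) binary array, raster[i, t] = 1 if neuron i spiked in bin t.
    A pattern is the N x R block of the raster ending at time t, encoded as
    a tuple of N*R bits; probabilities are estimated by sliding over t.
    """
    N, T = raster.shape
    counts = Counter(
        tuple(raster[:, t - R + 1:t + 1].flatten()) for t in range(R - 1, T)
    )
    total = T - R + 1
    return {pattern: c / total for pattern, c in counts.items()}

# Toy raster: N = 5 neurons, T = 10000 bins, independent Bernoulli firing.
rng = np.random.default_rng(0)
raster = (rng.random((5, 10000)) < 0.05).astype(int)

probs = empirical_pattern_probabilities(raster, R=2)    # here N*R = 10
most_common = max(probs, key=probs.get)
print(len(probs), "distinct patterns; most frequent has p =", round(probs[most_common], 3))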
|