  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A Neural Model of Call-counting in Anurans

Houtman, David B. 11 October 2012
Temporal features in the vocalizations of animals and insects play an important role in a diverse range of species-specific activities such as mate selection, territoriality, and hunting. The neural mechanisms underlying the response to such stimuli remain largely unknown. Two species of anuran amphibian provide a starting point for the investigation of the neurological response to species-specific advertisement calls. Neurons in the anuran midbrain of Rana pipiens and Hyla regilla exhibit an atypical response when presented with a fixed number of advertisement calls. The general response to these calls is mostly inhibitory; only when the correct number of calls is presented at the correct repetition rate will this inhibition be overcome and the neurons reach a spiking threshold. In addition to rate-dependent call-counting, these neurons are sensitive to missed calls: a pause of sufficient duration—the equivalent of two missed calls—effectively resets a neuron to its initial condition. These neurons thus provide a model system for investigating the neural mechanisms underlying call-counting and interval specificity in audition. We present a minimal computational model in which competition between finely tuned excitatory and inhibitory synaptic currents, combined with a small propagation delay between the two, broadly explains the three key features observed: rate dependence, call counting, and resetting. While limitations in the available data prevent the determination of a single set of parameters, a detailed analysis indicates that these parameters should fall within a certain range of values. Furthermore, while network effects are contraindicated by the data, the model suggests that recruitment of neurons plays a necessary role in facilitating the excitatory response of counting neurons—although this hypothesis remains untested.
Despite these limitations, the model sheds light on the mechanisms underlying the biophysics of counting, and thus provides insight into the neuroethology of amphibians in general.
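The rate-gated counting and reset behaviour described in this abstract can be caricatured algorithmically. The sketch below is not the thesis's biophysical model (which relies on competing excitatory and inhibitory synaptic currents with a propagation delay); it is a minimal event-based counter in which the acceptance interval, the decrement rule, the reset gap, and the required call count are all hypothetical values chosen for illustration.

```python
def call_counter(call_times, n_required=10, interval=(0.2, 0.4), reset_gap=0.8):
    """Event-based caricature of an interval-selective counting neuron.

    Calls arriving at the 'correct' repetition rate increment a counter;
    calls at the wrong rate let inhibition win (decrement); a long pause
    (about two missed calls) resets the count entirely. All numbers here
    are hypothetical, not fitted to Rana pipiens or Hyla regilla data.
    """
    lo, hi = interval
    count = 0
    last = None
    spike_times = []
    for t in call_times:
        if last is None or t - last > reset_gap:
            count = 1                      # first call, or reset after a pause
        elif lo <= t - last <= hi:
            count += 1                     # correct rate: excitation accumulates
        else:
            count = max(count - 1, 0)      # wrong rate: inhibition dominates
        last = t
        if count >= n_required:
            spike_times.append(t)          # threshold reached: neuron fires
            count = 0
    return spike_times
```

With these illustrative values, ten calls spaced 0.3 s apart yield a single spike on the tenth call, the same ten calls at 0.6 s intervals yield none, and a 1 s pause mid-sequence resets the count.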
4

A Neurocomputational Model of Smooth Pursuit Control to Interact with the Real World

Sadat Rezai, Seyed Omid 24 January 2014
Whether we want to drive a car, play a ball game, or simply enjoy watching a flying bird, we need to track moving objects. This is possible via smooth pursuit eye movements (SPEMs), which maintain the image of the moving object on the fovea (a very small portion of the retina with high visual resolution). At first glance, performing an accurate SPEM may seem trivial for the brain. However, imperfect visual coding, processing and transmission delays, a wide variety of object sizes, and background textures make the task challenging. The presence of distractors in the environment complicates it further, and it is no wonder that understanding SPEM has been a classic question in human motor control. Model building has played an influential role in understanding physiological systems, of which SPEM is an example. Models make quantitative predictions that can be tested in experiments. Modelling SPEM is therefore valuable not only for learning about the neurobiological mechanisms of smooth pursuit and, more generally, gaze control, but also for gaining insight into other sensorimotor functions. In this thesis, I present a neurocomputational SPEM model based on the Neural Engineering Framework (NEF) that drives an eye-like robot. The model interacts with the real world in real time: it takes naturalistic images as input and controls the robot with spiking model neurons. This work can be a first step towards more thorough validation of abstract SPEM control models. It is also a small step toward neural models that drive robots to accomplish more intricate sensorimotor tasks such as reaching and grasping.
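The closed-loop character of pursuit described above, where delayed retinal slip drives an integrating velocity command, can be sketched in a few lines. This is a generic negative-feedback caricature, not the NEF spiking model of the thesis; the gain, delay, and time step are assumed values.

```python
def simulate_pursuit(target_vel, n_steps=600, dt=0.001, delay_steps=100, gain=4.0):
    """Toy negative-feedback pursuit loop (illustrative; not the thesis's
    NEF spiking model). The controller sees retinal slip, i.e. target
    velocity minus eye velocity, only after a sensory delay of
    delay_steps * dt seconds, and integrates it into the eye command.
    Gain, delay, and time step are assumed values.
    """
    eye_vel = 0.0
    delayed = [0.0] * delay_steps          # buffer of past eye velocities
    trace = []
    for _ in range(n_steps):
        slip = target_vel - delayed[0]     # delayed retinal slip signal
        eye_vel += gain * slip * dt        # integrate slip into the command
        delayed = delayed[1:] + [eye_vel]  # advance the delay line
        trace.append(eye_vel)
    return trace
```

With the default values the product of loop gain and delay (4.0 times 0.1 s) stays well below the stability limit for a delayed integrator, so eye velocity converges toward the target velocity despite the sensory delay.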
5

Um modelo neural de aprimoramento progressivo para redução de dimensionalidade / A Progressive Enhancement Neural Model for dimensionality reduction

Camargo, Sandro da Silva January 2010
In recent decades, advances in data generation, collection, and storage technologies have contributed to the growth of databases across the various areas of human knowledge. This growth is seen not only in the number of samples but, above all, in the number of features describing each sample. Adding features increases the dimensionality of the mathematical space, leading to exponential growth of the data hypervolume, a problem known as the "curse of dimensionality". The curse of dimensionality has been a routine problem for scientists who, in order to understand and explain certain phenomena, have been faced with the need to find meaningful low-dimensional structures hidden in high-dimensional data. This process is called data dimensionality reduction (DDR). From a computational viewpoint, the natural consequence of DDR is a reduction of the hypothesis search space, improving performance and simplifying the results of knowledge modeling in autonomous learning systems. Among the techniques currently used in autonomous learning systems, artificial neural networks (ANNs) have become particularly attractive for modeling complex systems, especially when modeling is difficult or when the system dynamics do not allow on-line control. Despite being a powerful technique, ANNs have their performance affected by the curse of dimensionality. When the dimension of the input space is high, ANNs can spend a large share of their resources representing irrelevant portions of the search space, which hampers learning. Although ANNs, like other machine learning techniques, can identify the most informative features for a modeling process, the use of DDR techniques frequently improves the results of the learning process.
This thesis proposes a wrapper that implements a Progressive Enhancement Neural Model for DDR in supervised autonomous learning systems, aiming to optimize the modeling process. To validate the model, experiments were performed with private databases and public repositories from different knowledge domains. The generalization ability of the resulting models is evaluated by means of cross-validation techniques. The results demonstrate that the Progressive Enhancement Neural Model can identify the most informative features, enabling DDR and making it possible to create simpler and more accurate models. The approach and the experiments were implemented in the Matlab environment, using its ANN toolbox.
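A wrapper in this sense evaluates candidate feature subsets by training and scoring a model on each subset. The sketch below uses greedy forward selection scored by leave-one-out nearest-neighbour accuracy; it illustrates the wrapper idea only and is not the ANN-based Progressive Enhancement Neural Model of the thesis (the scoring model, stopping rule, and example data are all invented for illustration).

```python
def loo_accuracy(X, y, feats):
    """Leave-one-out 1-nearest-neighbour accuracy on the given feature subset."""
    correct = 0
    for i in range(len(X)):
        best_d, best_label = float("inf"), None
        for j in range(len(X)):
            if i == j:
                continue
            d = sum((X[i][f] - X[j][f]) ** 2 for f in feats)
            if d < best_d:
                best_d, best_label = d, y[j]
        correct += int(best_label == y[i])
    return correct / len(X)

def forward_select(X, y, max_features=2):
    """Greedy wrapper-style forward selection: repeatedly add the single
    feature that most improves the wrapped model's score, and stop as soon
    as no candidate improves it (a 'progressive' stopping rule)."""
    selected = []
    remaining = list(range(len(X[0])))
    best_score = 0.0
    while remaining and len(selected) < max_features:
        score, feat = max((loo_accuracy(X, y, selected + [f]), f) for f in remaining)
        if score <= best_score:
            break                      # no candidate improves the model
        selected.append(feat)
        remaining.remove(feat)
        best_score = score
    return selected, best_score
```

On a toy dataset where feature 0 separates the classes and feature 1 is noise, the wrapper keeps only feature 0, achieving dimensionality reduction without loss of accuracy.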
8

Síntese, modelagem e simulação de estruturas neurais morfologicamente realísticas / Synthesis, Modeling, and Simulation of Morphologically Realistic Neural Structures

Coelho, Regina Célia 25 September 1998
The morphological aspects of neurons and neural structures, although potentially important, have received relatively little attention in the neuroscience literature. This work constitutes a substantial part of a project under development in the Cybernetic Vision Research Group devoted to the study of the neural form/function relationship. More specifically, the present work pays particular attention to the synthesis, modeling, and simulation of morphologically realistic neural structures. The thesis begins with bibliographic reviews of biological vision and neuroscience, focused on the subjects considered here. We begin the description of the developments with a survey, evaluation, and proposal of neuromorphometric measures suited to expressing the properties most representative for our work, such as spatial coverage, complexity, and electrotonic decay. This part includes the methodology used to generate two-dimensional artificial neurons statistically similar to natural ones. The extension of this methodology to the three-dimensional case, validated by neuromorphometric analysis of the generated neurons, is also presented. Next, we describe the process of generating neural structures composed of such neurons. Considering one-layer neural models for orientation-specificity encoding, but without taking the neural shape into account, several cases are simulated, using gradients in the distribution of the synaptic weights and regular or random (uniform) distributions of the neurons in the structure.
The extension of these simulations to structures that consider the neural form in more detail, now using artificial neurons generated by the method described in this monograph, is presented next. Among other effects, we show that the extent of the dendritic arborization is a determining factor in the convergence rate and selectivity of the models, and that gradients in the extent of the synaptic arborizations are essential for the adequate encoding of orientations in centric modules containing randomly distributed somata.
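One ingredient of such a pipeline is a stochastic generator of dendritic trees whose summary measures (for example total length and tip count) can be compared against natural neurons. The recursive sketch below is a toy stand-in: the branching probability, depth limit, and segment lengths are invented for illustration and are not the statistically calibrated growth rules of the thesis.

```python
import random

def grow_dendrite(depth=0, max_depth=6, branch_p=0.7, seg_len=10.0, rng=None):
    """Recursively grow a toy binary dendritic tree.

    Returns (total_length, tip_count), two simple neuromorphometric
    summaries. Branching probability, depth limit, and segment lengths
    are hypothetical, not calibrated against real neurons.
    """
    if rng is None:
        rng = random.Random(42)
    length = seg_len * rng.uniform(0.5, 1.5)    # this segment's length
    if depth >= max_depth or rng.random() > branch_p:
        return length, 1                        # terminal tip
    # branch into two daughter segments, each slightly shorter on average
    l1, t1 = grow_dendrite(depth + 1, max_depth, branch_p, seg_len * 0.8, rng)
    l2, t2 = grow_dendrite(depth + 1, max_depth, branch_p, seg_len * 0.8, rng)
    return length + l1 + l2, t1 + t2
```

Generating many such trees and comparing their measure distributions against measured neurons is the validation logic the abstract describes, here reduced to its simplest form.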
9

Functional Consequences of Model Complexity in Hybrid Neural-Microelectronic Systems

Sorensen, Michael Elliott 15 April 2005
Hybrid neural-microelectronic systems, systems composed of biological neural networks and neuronal models, have great potential for the treatment of neural injury and disease. The utility of such systems will ultimately be determined by the ability of the engineered component to correctly replicate the function of biological neural networks. These models can take the form of mechanistic models, which reproduce neural function by describing the physiological mechanisms that produce neural activity, or empirical models, which reproduce neural function through more simplified mathematical expressions. We present our research into the role of model complexity in creating robust and flexible behaviors in hybrid systems. Beginning with a complex mechanistic model of a leech heartbeat interneuron, we create a series of three systematically reduced models that incorporate both mechanistic and empirical components. We then evaluate the robustness of these models to parameter variation and assess the flexibility of the models' activities. The modeling studies are validated by incorporating both mechanistic and semi-empirical models in hybrid systems with a living leech heartbeat interneuron. Our results indicate that model complexity serves to increase both the robustness of the system and the ability of the system to produce flexible outputs.
10

The Role of Heterogeneity in Rhythmic Networks of Neurons

Reid, Michael Steven 02 January 2007
Engineers often view variability as undesirable and seek to minimize it, such as when they employ transistor-matching techniques to improve circuit and system performance. Biology, however, makes no discernible attempt to avoid this variability, which is particularly evident in biological nervous systems whose neurons exhibit marked variability in their cellular properties. In previous studies, this heterogeneity has been shown to have mixed consequences on network rhythmicity, which is essential to locomotion and other oscillatory neural behaviors. The systems that produce and control these stereotyped movements have been optimized to be energy efficient and dependable, and one particularly well-studied rhythmic network is the central pattern generator (CPG), which is capable of generating a coordinated, rhythmic pattern of motor activity in the absence of phasic sensory input. Because they are ubiquitous in biological preparations and reveal a variety of physiological behaviors, these networks provide a platform for studying a critical set of biological control paradigms and inspire research into engineered systems that exploit these underlying principles. We are directing our efforts toward the implementation of applicable technologies and modeling to better understand the combination of these two concepts: the role of heterogeneity in rhythmic networks of neurons. The central engineering theme of our work is to use digital and analog platforms to design and build Hodgkin-Huxley conductance-based neuron models that will be used to implement a half-center oscillator (HCO) model of a CPG. The primary scientific question that we will address is to what extent this heterogeneity affects the rhythmicity of a network of neurons.
To do so, we will first analyze the locations, continuities, and sizes of bursting regions using single-neuron models and will then use an FPGA model neuron to study parametric and topological heterogeneity in a fully connected 36-neuron HCO. We found that heterogeneity can lead to more robust rhythmic networks of neurons, but the type and quantity of heterogeneity and the population-level metric that is used to analyze bursting are critical in determining when this occurs.
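The half-center oscillator concept itself can be illustrated with a two-unit rate model: mutual inhibition plus slow adaptation yields alternating activity. The sketch below uses the Matsuoka oscillator rather than the FPGA Hodgkin-Huxley neurons of the thesis; all parameter values are illustrative, and `het` is a hypothetical knob for injecting the kind of parametric heterogeneity the abstract discusses.

```python
def matsuoka_hco(n_steps=3000, dt=0.01, tau=0.25, tau_a=0.5,
                 beta=2.5, w_inh=2.5, drive=1.0, het=(1.0, 1.0)):
    """Two-unit half-center oscillator (Matsuoka rate model, not the
    thesis's conductance-based neurons). Each unit inhibits the other and
    adapts to its own activity; `het` scales each unit's tonic drive to
    introduce parametric heterogeneity. Parameter values are illustrative.
    """
    x = [0.1, 0.0]   # membrane-like states (slight asymmetry breaks symmetry)
    a = [0.0, 0.0]   # slow adaptation states
    trace = []
    for _ in range(n_steps):
        y = [max(0.0, xi) for xi in x]            # rectified firing rates
        for i in (0, 1):
            j = 1 - i
            x[i] += dt / tau * (-x[i] + het[i] * drive
                                - beta * a[i] - w_inh * y[j])
            a[i] += dt / tau_a * (-a[i] + y[i])
        trace.append((y[0], y[1]))
    return trace
```

With mutual inhibition stronger than the stability limit (here w_inh exceeds 1 + tau/tau_a), the symmetric state is unstable and the two units alternate; sweeping `het` away from (1.0, 1.0) is a simple way to probe how heterogeneity changes the rhythm.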