  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Neural Correlates of Emotion Word Processing in Bilinguals: An fNIRS Study

Ortega Manchego, Daniela Andrea 12 April 2024 (has links) (PDF)
Despite increasing interest in the interface between emotion word processing and bilingualism, the representation of valence during emotion word processing in the bilingual brain remains unclear. In the present study, we used functional near-infrared spectroscopy (fNIRS) to investigate the neural correlates of written emotion words in a first (L1) and a second (L2) language. Sixteen native English and sixteen native Chinese bilingual participants rated emotion words in their first and second language while we recorded their brain activity. Our results show distinct neural processing patterns between L1 and L2, with the former eliciting increased overall activation in the dorsolateral prefrontal cortex (DLPFC) during an emotional rating task. Our results also suggest increased neural activity in the left hemisphere for positive words and in the right hemisphere for negative words during L1 processing; intriguingly, we observed the opposite pattern during L2 processing. Emotion condition elicited a statistically significant difference in ratings and response times across groups. Implications for research on bilingualism and emotion are discussed.
2

A population gain control model of spatiotemporal responses in the visual cortex

Sit, Yiu Fai 22 March 2011 (has links)
The mammalian brain is a complex computing system that contains billions of neurons and trillions of connections. Is there a general principle that governs the processing in such large neural populations? This dissertation addresses this question using computational modeling and quantitative analysis of direct physiological measurements of large neural populations in the monkey primary visual cortex (V1). First, the complete spatiotemporal dynamics of V1 responses over the entire region activated by small stationary stimuli are characterized quantitatively. The dynamics of the responses are found to be systematic but complex and, importantly, inconsistent with many popular computational models of neural processing. Second, a simple population gain control (PGC) model that can account for these complex response properties is proposed for the small stationary stimuli. The PGC model is then used to predict the responses to stimuli composed of two elements and to stimuli that move at a constant speed. The predictions of the model are consistent with the measured responses in V1 for both types of stimuli. PGC is the first model that accounts for the complete spatiotemporal dynamics of V1 population responses for different types of stimuli, suggesting that gain control is a general mechanism of neural processing.
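The divisive gain-control idea that models like PGC build on can be illustrated with a short sketch, in which each unit's response is its own stimulus drive divided by the pooled drive of the whole population plus a semisaturation constant. The function name, the drive profile, and the `sigma` value below are illustrative assumptions, not the author's implementation:

```python
import numpy as np

def population_gain_control(drive, sigma=0.1):
    """Divisive normalization: each unit's drive divided by pooled drive.

    `sigma` is an illustrative semisaturation constant, not a fitted value.
    """
    return drive / (sigma + drive.sum())

# Spatial profile of input drive for a small stationary stimulus.
drive = np.array([0.1, 0.5, 1.0, 0.5, 0.1])
resp = population_gain_control(drive)
```

Because the denominator grows with the total input, strong stimuli compress the population response while preserving its spatial shape, which is the basic normalization behavior such models exploit.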
3

Multimodal activity in the primary sensory cortex of rats (Atividade multimodal no córtex sensorial primário de ratos)

Caixeta, Fabio Viegas 26 February 2010 (has links)
The currently accepted model of sensory processing states that different senses are processed in parallel, and that the activity of specific cortical regions defines the sensory modality perceived by the subject. In this work we used chronic multielectrode extracellular recordings to investigate to what extent neurons in the visual and tactile primary cortices (V1 and S1) of anesthetized rats respond to sensory modalities not traditionally associated with these cortices. Visual stimulation yielded 87% responsive neurons in V1, while 82% of S1 neurons responded to tactile stimulation. In the same stimulation sessions, we found 23% of V1 neurons responding to tactile stimuli and 22% of S1 neurons responding to visual stimuli. Our data support an increasing body of evidence for multimodal processing in primary sensory cortices, challenge the unimodal sensory processing paradigm, and suggest the need for a reinterpretation of the currently accepted model of cortical hierarchy.
4

NEURAL AND BEHAVIORAL RESPONSES TO COMPLEX ODOR STIMULI USING CRAYFISH AS A MODEL SYSTEM

WOLF, MARY CAROLINE 07 November 2005 (has links)
No description available.
5

Are speakers of a tone language better musicians? The potential effect of native knowledge of a tone language on the perception of pitch contrast (Les locuteurs d’une langue tonale sont-ils de meilleurs musiciens? Effet potentiel de la connaissance native d’une langue à tons sur la perception du contraste du pitch)

Li, Na 11 1900 (has links)
This thesis offers an overview of neuropsychological and electrophysiological studies on the possible interaction between the processing of language and music. Our main purpose is to examine the possible reasons why tone language speakers have a better capacity for perceiving pitch contrast in music than native speakers of an intonational language. First, we discuss the neural processing of prosody and music, attempting to show the overlap in cortical processing between the two domains. Next, we present the notion of a tone language and the neural processing of lexical tones. Afterwards, we discuss transfer effects of pitch-processing capacity between language and music, focusing on the influence of native knowledge of a tone language on musical perception. To do this, the encoding of pitch and the hemispheric lateralization of the processing of lexical tones and music are discussed.
6

Hardware implementation of re-configurable Restricted Boltzmann Machines for image recognition

Desai, Soham Jayesh 08 June 2015 (has links)
The Internet of Things (IoT) has triggered rapid advances in sensors, surveillance devices, wearables and body area networks with advanced Human-Computer Interfaces (HCI). Neural networks optimized algorithmically for high accuracy and high representational power are very deep and require tremendous storage and processing capability, leading to higher area and power costs. Developing smart front-ends for 'always on' sensor nodes therefore requires optimizing for power and area, and weighing trade-offs among resource utilization, processing time, area, power, and accuracy. Our experimental results show that, given the input constraints of the application in consideration, there exists a network configuration with minimum energy; this motivates a hardware-software co-design approach. We present a highly parameterized hardware design on an FPGA that allows re-configurability and the ability to evaluate different design choices in a short amount of time. We also describe how our design can be extended to offer run-time configurability, so that it can be altered for different applications as needed and used as a cascaded classifier, which is beneficial for continuous sensing in low-power applications. This thesis evaluates the use of Restricted Boltzmann Machines for building such re-configurable low-power front-ends. We develop the hardware architecture for such a system and provide experimental results for a case study of posture detection with body-worn cameras used in law enforcement.
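For orientation, a Restricted Boltzmann Machine of the kind evaluated in this thesis is commonly trained with one-step contrastive divergence (CD-1). The following is a minimal software sketch of that standard procedure, not the FPGA design described above; the layer sizes, learning rate, and toy dataset are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Tiny binary RBM trained with one-step contrastive divergence (CD-1)."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible biases
        self.b_h = np.zeros(n_hidden)   # hidden biases
        self.lr = lr

    def train_step(self, v0):
        # Positive phase: hidden activations driven by the data.
        ph0 = sigmoid(v0 @ self.W + self.b_h)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # Negative phase: one Gibbs step back to a reconstruction.
        pv1 = sigmoid(h0 @ self.W.T + self.b_v)
        ph1 = sigmoid(pv1 @ self.W + self.b_h)
        # CD-1 update: data correlations minus reconstruction correlations.
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
        self.b_v += self.lr * (v0 - pv1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)
        return float(np.mean((v0 - pv1) ** 2))  # reconstruction error

# A small dataset of four repeated binary patterns.
patterns = rng.integers(0, 2, size=(4, 16)).astype(float)
data = np.repeat(patterns, 8, axis=0)

rbm = RBM(n_visible=16, n_hidden=8)
errors = [rbm.train_step(data) for _ in range(50)]
```

The appeal for hardware is that each phase is a dense matrix-vector product followed by an elementwise nonlinearity, which maps naturally onto a parameterized MAC array.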
7

Computational Principles of Neural Processing: modulating neural systems through temporally structured stimuli

Castellano, Marta 11 December 2014 (has links)
In order to understand how the neural system encodes and processes information, research has focused on the study of neural representations of simple stimuli, paying no particular attention to their temporal structure, on the assumption that a deeper understanding of how the neural system processes simplified stimuli will lead to an understanding of how the brain functions as a whole [1]. However, time is intrinsically bound to neural processing, as all sensory, motor, and cognitive processes are inherently dynamic. Despite the importance of neural and stimulus dynamics, little is known about how the neural system represents rich spatio-temporal stimuli, which ultimately link the neural system to a continuously changing environment. The purpose of this thesis is to understand whether and how temporally structured neural activity modulates the processing of information within the brain, proposing in turn that the precise interaction between the spatio-temporal structure of the stimulus and the neural system is particularly relevant when considering the ongoing plasticity mechanisms that allow the neural system to learn from experience. To answer these questions, three studies were conducted. First, we studied the impact of spiking temporal structure on a single neuron's spiking response, and explored how the functional connections to pre-synaptic neurons are modulated through adaptation. Our results suggest that, in a generic spiking neuron, the temporal structure of pre-synaptic excitatory and inhibitory activity modulates both the spiking response of that neuron and, most importantly, the speed and strength of learning. Second, we present a generic model of a spiking neural network that processes rich spatio-temporal stimuli, and explored whether the processing of stimuli within the network is modulated by the interaction with an external dynamical system (i.e. the extracellular medium), as well as by several plasticity mechanisms. Our results indicate that the memory capacity, which reflects a dynamic short-term memory of incoming stimuli, can be extended in the presence of plasticity and through the interaction with an external dynamical system, while maintaining the network dynamics in a regime suitable for information processing. Finally, we characterized cortical signals (electroencephalography, EEG) of human subjects performing a visual categorization task. Among other aspects, we studied whether changes in the dynamics of the stimulus lead to changes in neural processing at the cortical level, and introduced the relevance of large-scale integration for cognitive processing. Our results suggest that the dynamic synchronization across distributed cortical areas is stimulus-specific and specifically linked to perceptual grouping. Taken together, the results presented here suggest that the temporal structure of the stimulus modulates how the neural system encodes and processes information within single neurons, networks of neurons, and cortical areas. In particular, the results indicate that timing modulates single-neuron connectivity structures, the memory capability of networks of neurons, and the cortical representation of visual stimuli. While the learning of invariant representations remains the best framework to account for a number of neural processes (e.g. long-term memory [2]), the reported studies support the idea that, at least to some extent, the neural system functions in a non-stationary fashion, where the processing of information is modulated by the stimulus dynamics itself. Altogether, this thesis highlights the relevance of understanding adaptive processes and their interaction with the temporal structure of the stimulus, arguing that a further understanding of how the neural system processes dynamic stimuli is crucial for the understanding of neural processing itself, and that any theory aiming to understand neural processing should consider the processing of dynamic signals. 1. Frankish, K., and Ramsey, W. The Cambridge Handbook of Cognitive Science. Cambridge University Press, 2012. 2. McGaugh, J. L. Memory: a Century of Consolidation. Science 287, 5451 (Jan. 2000), 248-251.
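The "memory capacity" notion referenced above is commonly probed by training a linear readout of a recurrent network to reproduce its input from several steps in the past. A toy sketch along those lines follows, using a generic rate-based echo state network rather than the author's specific spiking model; the network size, spectral radius, and delay are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy echo state network: fixed random recurrent weights and a linear
# readout trained to reproduce the input from `delay` steps in the past.
N, T, delay = 100, 2000, 5
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9

u = rng.uniform(-1.0, 1.0, T)  # random input stream
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Ridge-regression readout for the delayed target u[t - delay].
X, y = states[delay:], u[:-delay]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
r = float(np.corrcoef(X @ w_out, y)[0, 1])  # recall quality at this delay
```

Squaring this correlation and summing it over all delays gives one common definition of the network's total memory capacity, the quantity whose extension under plasticity the thesis investigates.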
8

The bi-level input processing model of first and second language perception

Grenon, Izabelle 21 July 2010 (has links)
The focus of the current work is the articulation of a model of speech sound perception that is informed by neurological processing and accounts for psycholinguistic behavior related to the perception of linguistic units such as features, allophones and phonemes. The Bi-Level Input Processing (BLIP) model, as the name suggests, proposes two levels of speech processing: the neural mapping level and the phonological level. The model posits that perception of speech sounds corresponds to the processing of a limited number of acoustic components by neural maps tuned to those components, where each neural map corresponds to a contrastive speech category along the relevant acoustic dimension in the listener's native language. These maps are in turn associated with abstract features at the phonological level, and the combination of multiple maps can represent a segment (or phoneme), mora or syllable. To evaluate how listeners process multiple acoustic cues when categorizing speech contrasts, it is useful to distinguish between different types of processing; three are identified and described in this work: additive, connective and competitive. The way speech categories are neurally encoded in one's L1 may impact the perception and acquisition of non-native speech contrasts later in life. Accordingly, five predictions about the perception of non-native contrasts by mature listeners are derived from the proposals of the BLIP model. These predictions are exemplified and supported by means of four perceptual behavioral experiments. Experiments I and II evaluate the use of spectral information (changes in F1 and F2) and vowel duration for identification of an English vowel contrast ('beat' vs. 'bit') by native North American English, Japanese and Canadian French speakers. Experiments III and IV evaluate the use of vowel duration and periodicity for identification of an English voicing contrast ('bit' vs. 'bid') by the same speakers. Results of these experiments demonstrate that the BLIP model correctly predicts sources of difficulty for L2 learners in perceiving non-native sounds, and that, in many cases, L2 learners are able to capitalize on their sensitivity to acoustic cues used in L1 to perceive novel (L2) contrasts, even if those contrasts are neutralized at the phonological level in L1. Hence, the BLIP model has implications not only for the study of L1 development and cross-linguistic comparisons, but also for a better understanding of L2 perception. Implications of this novel approach to L2 research for language education are briefly discussed.
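The additive type of cue processing mentioned above can be illustrated by a classifier that sums weighted evidence from two acoustic cues before deciding on a category. The cue values below are synthetic and purely illustrative, in normalized units invented for this sketch, not data from Experiments I-IV:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic cue values for two vowel categories (illustrative units only):
# category 1 ('beat') has higher spectral values and longer durations.
n = 200
spectral = np.concatenate([rng.normal(-1.0, 0.5, n), rng.normal(1.0, 0.5, n)])
duration = np.concatenate([rng.normal(-0.5, 0.7, n), rng.normal(0.5, 0.7, n)])
label = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = 'bit', 1 = 'beat'

# Additive integration: a logistic classifier sums weighted cue evidence.
X = np.column_stack([spectral, duration, np.ones(2 * n)])
w = np.zeros(3)
for _ in range(500):  # plain gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - label) / len(label)

acc = float(np.mean(((X @ w) > 0) == label))
```

The learned weights play the role of cue reliabilities: a listener whose L1 weights one cue near zero would, on this additive account, have difficulty with an L2 contrast carried mainly by that cue.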
9

Research and Design of Neural Processing Architectures Optimized for Embedded Applications

Wu, Binyi 28 May 2024 (has links)
Deploying neural networks on edge devices and bringing them into our daily lives is attracting more and more attention. However, their expensive computational cost makes many embedded applications daunting. The primary objective of my doctoral studies is to contribute to resolving this predicament: optimizing neural networks and designing corresponding efficient neural processing units for edge devices. This work took algorithmic research, specifically the optimization of deep neural networks, as a starting point and then applied its findings to steer the architecture design of Neural Processing Units (NPUs). The optimization of neural network models started with single-precision neural network quantization and progressed to mixed precision. The NPU architecture development followed the algorithmic research findings to achieve hardware/software co-design. Furthermore, a new approach to hardware and software co-development was introduced, aimed at expediting the prototyping and performance assessment of NPUs. This approach targets early-stage development: it helps developers focus on the design and optimization of NPUs and significantly shortens the development cycle. In the final project, a machine-learning-based approach was applied to explore and optimize the computational and memory resources of the NPU. The work spans several areas, from algorithmic research to hardware design, but all of it aims at improving the inference efficiency of neural networks: the algorithmic optimization reduces the memory footprint and computational cost of neural networks, the NPU design focuses on improving the utilization of hardware resources, and the proposed software and hardware co-development approach shortens the design cycle and speeds up design iterations. The order presented above corresponds to the structure of this dissertation; each chapter is devoted to one topic and covers the relevant research, methodology, and experimental results.
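The single-precision quantization that the algorithmic work starts from can be sketched as symmetric uniform quantization of a weight tensor, where a scale factor maps the largest weight magnitude onto the integer grid. This is a generic sketch of that baseline, not the dissertation's Squeeze-and-Threshold method:

```python
import numpy as np

def quantize(w, bits=8):
    """Symmetric uniform quantization (assumes w is not all zeros)."""
    qmax = 2 ** (bits - 1) - 1
    scale = float(np.abs(w).max()) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q.astype(np.int8 if bits == 8 else np.int32), scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=64).astype(np.float32)
q, s = quantize(w, bits=8)
err = float(np.abs(w - dequantize(q, s)).max())  # bounded by s / 2
```

Because every weight is reconstructed as an integer times one shared scale, the MAC array can operate entirely on narrow integers, which is what makes mixed-precision hardware such as the NPU described above worthwhile.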
