1.
Adaptation in a deep network (Ruiz, Vito Manuel, 08 July 2011)
Though adaptational effects are found throughout the visual system, the underlying mechanisms and benefits of this phenomenon are not yet known. In this work, the visual system is modeled as a Deep Belief Network (DBN), with a novel "post-training" paradigm (i.e. training the network further on certain stimuli) used to simulate adaptation in vivo. An optional sparse variant of the DBN is used to help bring about meaningful and biologically relevant receptive fields, and to examine the effects of sparsification on adaptation in their own right. While results are inconclusive, there is some evidence of an attractive bias effect in the adapting network, whereby the network's representations are drawn closer to the adapting stimulus. As a similar attractive bias is documented in human perception as a result of adaptation, there is thus evidence that the statistical principles underlying the adapting DBN, including efficient coding and optimal information transfer given limited resources, also play a role in the adapting visual system. These results hold irrespective of sparsification. As adaptation has, to the author's knowledge, never been tested directly in a neural network, this work sets a precedent for future experiments.
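For readers who want to experiment with the post-training idea, a minimal sketch follows. It uses a single restricted Boltzmann machine layer (the building block of a DBN) rather than the thesis' full network, and the stimuli, layer size, and training schedule are invented purely for illustration.

```python
# Illustrative sketch (not the thesis code): simulating "post-training"
# adaptation on a single RBM layer; a full DBN would stack several of these.
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)

# Hypothetical stand-ins for visual stimuli: binary 8x8 "images".
baseline_stimuli = (rng.random((500, 64)) > 0.5).astype(float)
adapting_stimulus = np.tile((rng.random(64) > 0.5).astype(float), (100, 1))

rbm = BernoulliRBM(n_components=32, learning_rate=0.05, random_state=0)
for _ in range(20):                       # normal developmental training
    rbm.partial_fit(baseline_stimuli)
before = rbm.transform(adapting_stimulus[:1])

for _ in range(10):                       # "post-training" on the adaptor only
    rbm.partial_fit(adapting_stimulus)
after = rbm.transform(adapting_stimulus[:1])

# An attractive bias would show up as hidden representations moving
# toward (becoming more selective for) the adapting stimulus.
print("representation shift:", np.linalg.norm(after - before))
```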
2.
Unsupervised space-time learning in primary visual cortex (Price, Byron Howard, 24 January 2023)
The mammalian visual system is an incredibly complex computational device, capable of performing the various tasks of seeing: navigation, pattern and object recognition, motor coordination, and trajectory extrapolation, among others. Decades of research have shown that experience-dependent plasticity of cortical circuitry underlies the impressive ability to rapidly learn many of these tasks and to adjust as required. One particular thread of investigation has focused on unsupervised learning, wherein changes to the visual environment lead to corresponding changes in cortical circuits. The most prominent example of unsupervised learning is ocular dominance plasticity, caused by visual deprivation of one eye and leading to a dramatic re-wiring of cortex. Other examples make more subtle changes to the visual environment through passive exposure to novel visual stimuli. Here, we use one such unsupervised paradigm, sequence learning, to study experience-dependent plasticity in the mouse visual system. Through a combination of theory and experiment, we argue that the mammalian visual system is an unsupervised learning device.
Beginning with a mathematical exploration of unsupervised learning in biology, engineering, and machine learning, we seek a more precise expression of our fundamental hypothesis. We draw connections between information theory, efficient coding, and common unsupervised learning algorithms such as Hebbian plasticity and principal component analysis. Efficient coding suggests a simple rule for transmitting information in the nervous system: use more spikes to encode unexpected information, and fewer spikes to encode expected information. Therefore, expectation violations ought to produce prediction errors, or brief periods of heightened firing when an unexpected event occurs. Meanwhile, modern unsupervised learning algorithms show how such expectations can be learned.
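The connection between Hebbian plasticity and principal component analysis mentioned above can be made concrete: Oja's rule, a normalized Hebbian update, drives a single linear neuron's weight vector toward the first principal component of its inputs. A minimal sketch with synthetic data:

```python
# Minimal sketch of the Hebbian-PCA connection: Oja's rule drives a single
# linear neuron's weights toward the first principal component of its input.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 2D input with most variance along the direction (1, 1).
C = np.array([[3.0, 2.0], [2.0, 3.0]])
X = rng.multivariate_normal([0, 0], C, size=5000)

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x                      # Hebbian: postsynaptic activity
    w += eta * y * (x - y * w)     # Oja's rule: Hebb term plus decay

pc1 = np.linalg.eigh(C)[1][:, -1]  # leading eigenvector of the covariance
print("alignment with PC1:", abs(w @ pc1) / np.linalg.norm(w))  # approaches 1
```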
Next, we review data from decades of visual neuroscience research, highlighting the computational principles and synaptic plasticity processes that support biological learning and seeing. By tracking the flow of visual information from the retina to thalamus and primary visual cortex, we discuss how the principle of efficient coding is evident in neural activity. One common example is predictive coding in the retina, where ganglion cells with canonical center-surround receptive fields compute a prediction error, sending spikes to the central nervous system only in response to locally-unpredictable visual stimuli. This behavior can be learned through simple Hebbian plasticity mechanisms. Similar models explain much of the activity of neurons in primary visual cortex, but we also discuss ways in which the theory fails to capture the rich biological complexity.
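As an illustration of the retinal predictive-coding idea (not code from this thesis), the center-surround prediction error can be sketched as a difference of Gaussians, where the broad surround acts as a local prediction of the narrow center:

```python
# Sketch of retinal predictive coding: a difference-of-Gaussians filter
# subtracts a local (surround) prediction from the center, so uniform
# regions yield ~0 output and only locally-unpredictable structure remains.
import numpy as np
from scipy.ndimage import gaussian_filter

image = np.ones((64, 64))
image[20:30, 20:30] = 2.0          # a locally-unpredictable patch

center = gaussian_filter(image, sigma=1.0)    # narrow center
surround = gaussian_filter(image, sigma=4.0)  # broad surround = prediction
prediction_error = center - surround

print("error in uniform region:", abs(prediction_error[50, 50]))  # ~0
print("error at the patch edge:", abs(prediction_error[20, 25]))  # large
```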
Finally, we present novel experimental results from physiological investigations of mouse primary visual cortex. We trained mice by passively exposing them to complex spatiotemporal patterns of light: rapidly-flashed sequences of images. We find evidence that visual cortex learns these sequences in a manner consistent with efficient coding, such that unexpected stimuli tend to elicit more firing than expected ones. Overall, we observe dramatic changes in evoked neural activity across days of passive exposure. Neural responses to the first, unexpected sequence element increase with days of training, while responses at other, expected time points either decrease or stay the same. Furthermore, substituting an unexpected element for an expected one, or omitting an expected element, both cause brief bursts of increased firing. Our results therefore provide evidence for unsupervised learning and efficient coding in the mouse visual system, especially because unexpected events drive prediction errors. More broadly, our analysis suggests novel experiments that could be performed in the near future, and provides a useful framework for understanding visual perception and learning.
3.
Differential place marking and differential object marking (Haspelmath, Martin, 29 May 2024)
This paper gives an overview of differential place marking phenomena and formulates a number of universals that seem to be well supported. Differential place marking is a situation in which the coding of locative, allative, or ablative roles depends on subclasses of nouns, in particular place names (toponyms), inanimate common nouns, and human nouns. When languages show asymmetric coding differences depending on such subclasses, they show shorter (and often zero) coding of place roles with toponyms, and longer (often adpositional rather than affixal) coding of place roles with human nouns. Like differential object marking, differential place marking can be explained by frequency asymmetries, expectations derived from frequencies, and the general preference for efficient coding. I also argue that differential place marking patterns provide an argument against the need to appeal to ambiguity avoidance in explaining differential object marking.
4.
PERFORMANCE OPTIMIZATION OF A STRUCTURED CFD CODE - GHOST ON COMMODITY CLUSTER ARCHITECTURES (Kristipati, Pavan K., 01 January 2008)
This thesis focuses on optimizing the performance of an in-house, structured, 2D CFD code, GHOST, on commodity cluster architectures. The basic philosophy of the work is to optimize the cache usage of the code by implementing efficient coding techniques, without changing the underlying numerical algorithm. The various optimization techniques implemented and the resulting changes in performance are presented. Two techniques, external and internal blocking, that were implemented earlier to tune this code's performance are reviewed, followed by further tuning efforts to circumvent the problems associated with the blocking techniques. Later, to establish the generality of the optimization techniques, testing was done on a more complicated test case. All the techniques presented in this thesis were tested on steady, laminar test cases. Optimized versions of the code are shown to achieve better performance on the variety of commodity cluster architectures chosen in this study.
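The GHOST code itself is not reproduced here, but the loop-blocking idea behind the blocking techniques can be sketched generically: restructure a grid sweep into cache-sized tiles without changing the numerical result. A toy Python illustration of the access-pattern restructuring (the real performance benefit appears in compiled code):

```python
# Generic illustration of loop blocking (tiling) for a 2D stencil sweep.
import numpy as np

def sweep_unblocked(u):
    """One Jacobi-style relaxation sweep over the whole grid."""
    out = u.copy()
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            out[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
    return out

def sweep_blocked(u, block=32):
    """Same sweep, visiting the grid in tiles so each tile's data is
    reused while it is still resident in cache."""
    out = u.copy()
    n, m = u.shape
    for bi in range(1, n - 1, block):
        for bj in range(1, m - 1, block):
            for i in range(bi, min(bi + block, n - 1)):
                for j in range(bj, min(bj + block, m - 1)):
                    out[i, j] = 0.25 * (u[i-1, j] + u[i+1, j]
                                        + u[i, j-1] + u[i, j+1])
    return out

u = np.random.rand(256, 256)
assert np.allclose(sweep_unblocked(u), sweep_blocked(u))  # identical results
```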
5.
Codificação Eficiente de Sinais de Eletrocardiograma / Efficient Coding of ECG Signals (Araújo, Enio Aguiar de, 28 May 2010)
Typically, in the digital processing of electrocardiography signals, linear transformations are used to make the signals more tractable for a given application. For applications such as classification or data compression, the aim is usually to reduce the redundancy present in the signals, increasing the potential of those applications. Several methods are commonly used for this task: the Fourier transform, the wavelet transform, and principal component analysis. All of these methods have some limitation, be it the use of a predefined space, the restriction to orthogonal spaces, or reliance on second-order statistics only. In this work we propose the use of independent component analysis (ICA) for the encoding of ECG signals, taking as theoretical basis the neuroscience concept of efficient coding. Two important results were found: the space of basis functions generated by the proposed method differs from the spaces produced by the usual methods, and, on average, the method reduces the redundancy of the signals more effectively. We conclude that the traditional methods may not fully exploit the coding potential of ECG signals due to their limitations, and that ICA may be a reliable method for improving performance over current systems.
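A rough sketch of the general approach follows: learn an ICA basis for signal windows and compare residual coefficient dependence against a PCA basis. The synthetic "ECG-like" data, window length, and component counts below are stand-ins, not the thesis' actual pipeline or recordings:

```python
# Hedged sketch: learn a data-adapted basis with ICA and compare residual
# coefficient dependence against PCA. Both decorrelate (second order), so
# redundancy beyond second order is probed via magnitude correlations.
import numpy as np
from sklearn.decomposition import FastICA, PCA

rng = np.random.default_rng(3)
# Stand-in "ECG": sparse combinations of bump templates in 64-sample windows.
t = np.arange(64)
templates = [np.exp(-0.5 * ((t - c) / 2.5) ** 2) for c in range(4, 64, 8)]
X = np.array([sum(rng.laplace(0.0, 1.0) * tpl for tpl in templates)
              + 0.05 * rng.normal(size=64)
              for _ in range(2000)])

S_ica = FastICA(n_components=8, random_state=0, max_iter=1000).fit_transform(X)
S_pca = PCA(n_components=8).fit_transform(X)

def residual_dependence(S):
    """Mean correlation between coefficient magnitudes: captures dependence
    beyond second order, which PCA cannot remove (lower = less redundant)."""
    c = np.corrcoef(np.abs(S).T)
    return np.abs(c[np.triu_indices_from(c, k=1)]).mean()

print("PCA residual dependence:", residual_dependence(S_pca))
print("ICA residual dependence:", residual_dependence(S_ica))  # typically lower
```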
6.
PROCESSAMENTO E ANÁLISE DE SINAIS MAMOGRÁFICOS NA DETECÇÃO DO CÂNCER DE MAMA: Diagnóstico Auxiliado por Computador (CAD) / PROCESSING AND ANALYSIS OF MAMMOGRAPHIC SIGNALS IN THE DETECTION OF BREAST CANCER: Computer Aided Diagnosis (CAD) (Costa, Daniel Duarte, 06 December 2012)
Breast cancer is the leading cause of cancer death among women in Western countries. To improve the accuracy of radiologists' diagnoses and to make them earlier, new computer vision systems have been developed and refined over time. Several methods for the detection and classification of lesions in mammography images by computer-aided diagnosis (CAD) systems have been developed using different statistical techniques. In this thesis, we present CAD methodologies to detect and classify mass regions in mammographic images drawn from two image databases, DDSM and MIAS. The results show that these methods can detect up to 96% of mass regions, using an efficient coding technique together with the k-means clustering algorithm, and can correctly classify regions as mass or non-mass in up to 90% of cases using independent component analysis (ICA) and linear discriminant analysis (LDA). These results gave rise to a web application, SADIM (Sistema de Auxílio a Diagnóstico de Imagem Mamográfica), which can be used by any registered professional.
Keywords: medical image processing; computer-aided diagnosis; mammography image analysis; efficient coding.
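The flavor of the detection stage can be sketched as follows: describe image patches by coefficients in a learned basis and cluster them with k-means so that candidate mass patches separate from background. The features, patch sizes, and synthetic data below are illustrative only, with PCA standing in for the thesis' efficient-coding step:

```python
# Hedged sketch of the detection idea: coefficients in a learned basis,
# clustered with k-means. Parameters are illustrative, not the thesis'.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)

def make_patch(has_mass):
    """Synthetic stand-in: background noise, optionally with a bright blob."""
    p = 0.1 * rng.normal(size=(16, 16)) + 0.2
    if has_mass:
        y, x = np.mgrid[:16, :16]
        p += 0.8 * np.exp(-((x - 8) ** 2 + (y - 8) ** 2) / 20.0)
    return p.ravel()

patches = np.array([make_patch(i < 80) for i in range(400)])

# Efficient-coding stand-in: compress each patch to a few coefficients.
codes = PCA(n_components=10).fit_transform(patches)

# Two clusters: candidate mass regions vs. background.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(codes)
mass_cluster = np.argmax([patches[labels == k].mean() for k in (0, 1)])
print("patches flagged as candidate masses:", (labels == mass_cluster).sum())
```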
7.
Compressão de imagens utilizando análise de componentes independentes / Image Compression Using Independent Component Analysis (Sousa Junior, Carlos Magno, 20 March 2007)
Redundancy is an old issue in data compression research. Compression methods based on statistics have been heavily influenced by neuroscience research. In this work, we propose an image compression system based on the efficient coding concept derived from models of neural information processing. The system's performance is compared with discrete cosine transform (DCT) and principal component analysis (PCA) results at several compression ratios (CR). Evaluation by both objective measurements and visual inspection showed that the proposed system is more robust to distortions, such as blocking artifacts, than DCT and PCA.
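A sketch of the block-transform baseline being compared against: code 8x8 blocks in a PCA basis, keep only the top coefficients, and measure reconstruction error; the proposed system is analogous, with an ICA-learned basis in place of PCA. The blocks and parameters below are synthetic stand-ins:

```python
# Hedged sketch of block-transform compression with a PCA basis:
# keep k of 64 coefficients per block and measure reconstruction error.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
# Stand-in 8x8 "image" blocks with smooth structure plus noise.
blocks = np.array([np.outer(np.sin(np.linspace(0, f, 8)),
                            np.cos(np.linspace(0, f, 8))).ravel()
                   + 0.05 * rng.normal(size=64)
                   for f in rng.uniform(0.5, 3.0, size=1000)])

k = 8                                # keep 8 of 64 coefficients (CR = 8:1)
pca = PCA(n_components=64).fit(blocks)
coeffs = pca.transform(blocks)
coeffs[:, k:] = 0                    # discard low-variance coefficients
reconstructed = pca.inverse_transform(coeffs)

mse = np.mean((blocks - reconstructed) ** 2)
print(f"MSE at {64 // k}:1 compression: {mse:.5f}")
```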
8.
Le statisticien neuronal : comment la perspective bayésienne peut enrichir les neurosciences / The Neuronal Statistician: How the Bayesian Perspective Can Enrich Neuroscience (Dehaene, Guillaume, 09 September 2016)
Bayesian inference answers key questions of perception, such as: "What should I believe, given what I have perceived?". As such, it is a rich source of models for cognitive science and neuroscience (Knill and Richards, 1996). This PhD thesis explores two such models. We first investigate an efficient coding problem, asking how best to represent probabilistic information in unreliable neurons; we improve on earlier models by introducing limited input information into ours. We then explore a new ideal-observer model of sound localization based on the interaural time difference cue, whereas current models are purely descriptive models of the electrophysiology. Finally, we explore the properties of the Expectation Propagation approximate-inference algorithm, which offers great potential both for practical machine-learning applications and for neuronal population models, but is currently poorly understood.
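For readers unfamiliar with Expectation Propagation, its core operation can be illustrated in a few lines: project a non-Gaussian "tilted" distribution onto the Gaussian matching its first two moments. The moments here are computed by brute-force quadrature, and the probit factor and parameter values are arbitrary examples; full EP iterates this projection over all likelihood factors:

```python
# Illustration of the projection step at the heart of Expectation
# Propagation: moment-match a Gaussian to a non-Gaussian tilted density.
import numpy as np
from scipy.stats import norm

m, v = 0.0, 4.0                      # cavity (prior) Gaussian N(m, v)
x = np.linspace(-15, 15, 20001)
dx = x[1] - x[0]

# Tilted distribution: cavity Gaussian times one probit likelihood factor.
tilted = norm.pdf(x, m, np.sqrt(v)) * norm.cdf(2.0 * x)
tilted /= tilted.sum() * dx          # normalize numerically

# Moment matching = minimizing KL(tilted || gaussian) over Gaussians.
m_new = (x * tilted).sum() * dx
v_new = ((x - m_new) ** 2 * tilted).sum() * dx
print(f"projected Gaussian: mean={m_new:.3f}, var={v_new:.3f}")
```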
9.
Campos receptivos similares às wavelets de Haar são gerados a partir da codificação eficiente de imagens urbanas / Receptive Fields Similar to Haar Wavelets Are Generated by Efficient Coding of Urban Images (Cavalcante, André Borges, 25 February 2008)
Efficient coding of natural images yields filters similar to the Gabor-like receptive fields of simple cells in primary visual cortex. However, natural and man-made images have different statistical properties. Here we show that a simple theoretical analysis of power spectra in a sparse model suggests that natural and man-made images each need their own specific filters. Indeed, when applying sparse coding to man-made scenes, we found both Gabor-like and Haar-wavelet-like filters. Furthermore, man-made images projected onto those filters yielded a smaller mean squared error than when projected onto Gabor-like filters only. Thus, as natural and man-made images require different filters to be represented efficiently, these results suggest that, besides Gabor-like cells, the primary visual cortex should also have cells with Haar-like receptive fields.
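The general procedure can be sketched with off-the-shelf sparse dictionary learning: on whitened natural-image patches the learned atoms tend to be Gabor-like, and the thesis reports additional Haar-like atoms on urban scenes. Random data keeps the sketch below self-contained, so real image patches must be substituted to reproduce such filters:

```python
# Hedged sketch of sparse coding on image patches; the random data here is
# a placeholder, so the learned atoms will not be Gabor- or Haar-like until
# real (whitened) patches are substituted.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(6)
# Stand-in for whitened 8x8 image patches (replace with real patches).
patches = rng.normal(size=(2000, 64))
patches -= patches.mean(axis=1, keepdims=True)

dico = MiniBatchDictionaryLearning(
    n_components=64,          # overcomplete dictionaries are also common
    alpha=1.0,                # sparsity penalty
    batch_size=128,
    random_state=0,
)
dico.fit(patches)
filters = dico.components_.reshape(64, 8, 8)   # candidate "receptive fields"

# Projecting patches on the learned filters and measuring reconstruction
# error mirrors the thesis' Gabor-vs-(Gabor+Haar) comparison.
codes = dico.transform(patches)
recon = codes @ dico.components_
print("mean squared reconstruction error:", np.mean((patches - recon) ** 2))
```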