  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The role of prediction error in probabilistic associative learning

Cevora, Jiri January 2018 (has links)
This thesis focuses on probabilistic associative learning. One of the classic effects in this field is the stimulus associability effect, for which I derive a statistically optimal inference model and a corresponding approximation that addresses a number of problems with the original account of Mackintosh. My proposed account of associability - a variable learning rate that depends on the relative informativeness of stimuli - also accounts for the classic blocking effect (Kamin, 1969) without the need for prediction error (PE) computation. Given that blocking was the main impetus for placing PE at the centre of learning theories, I critically re-evaluate other evidence for PE in learning, particularly the recent neuroimaging evidence. I conclude that the brain data are not as clear-cut as often presumed. The main shortcoming of the evidence implicating PE in learning is that probabilistic associative learning is mostly described as a transition from one state of belief to another, yet those beliefs are typically observed only after multiple learning episodes and in a very coarse manner. To address this problem, I develop an experimental paradigm and accompanying statistical methods that allow one to infer the beliefs at any given point in time. However, even with the rich data provided by this new paradigm, the blocking effect still cannot provide conclusive evidence for the role of PE in learning. I solve this problem by deriving a novel conceptualisation of learning as a flow in probability space. This allows me to derive two novel effects that can unambiguously distinguish learning that is driven by PE from learning that is not. I call these effects generalized blocking and false blocking, given their inspiration by the original paradigm of Kamin (1969). These two effects can be generalized to the entirety of probability space, rather than just the two specific points probed by the paradigms of Mackintosh and Kamin, and therefore offer greater sensitivity to differences in learning mechanisms. In particular, I demonstrate that these effects are necessary consequences of PE-driven learning, but not of learning based on the relative informativeness of stimuli. Lastly, I develop an online experiment to acquire data on the new paradigm from a large number (approximately 2000) of participants recruited via social media. The results of model fitting, together with statistical tests of generalized blocking and false blocking, provide strong evidence against a PE-driven account of learning, instead favouring the relative informativeness account derived at the start of the thesis.
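For context, the PE-driven account that this thesis argues against is standardly formalized by the Rescorla-Wagner rule. The following is a minimal sketch (a standard Rescorla-Wagner learner, not the author's informativeness model; all parameter values are illustrative) showing how a shared prediction error produces Kamin's blocking effect.

```python
import numpy as np

def rescorla_wagner_trial(V, present, reward, alpha=0.1):
    """One trial of the Rescorla-Wagner rule: every stimulus present on the
    trial is updated in proportion to a shared prediction error."""
    prediction = V[present].sum()          # summed associative strength
    pe = reward - prediction               # prediction error (PE)
    V[present] += alpha * pe               # the same PE drives all present cues
    return V, pe

# Blocking (Kamin, 1969): pre-train cue A alone, then train A+B together.
V = np.zeros(2)                            # V[0] = cue A, V[1] = cue B
for _ in range(100):                       # phase 1: A -> reward
    V, _ = rescorla_wagner_trial(V, np.array([0]), reward=1.0)
for _ in range(100):                       # phase 2: A+B -> reward
    V, _ = rescorla_wagner_trial(V, np.array([0, 1]), reward=1.0)
print(V)                                   # B stays near 0: learning to B is "blocked"
```

Because cue A already predicts the reward after phase 1, the PE is near zero throughout phase 2 and cue B acquires almost no associative strength; an informativeness-based learning rate can reproduce the same outcome without computing a PE, which is the thesis's point of departure.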
2

Phase Validation Of Neurotoxic Animal Models Of Parkinson's Disease

Telkes, Ilknur 01 December 2012 (has links) (PDF)
Parkinson's disease (PD) is characterized by the progressive loss of dopaminergic nigral neurons and striatal dopamine, resulting in serious motor deficits as well as some non-motor anomalies. Animal models of human neurodegenerative diseases are essential for better understanding their pathogenesis and for developing efficient therapeutic tools. There are many different PD models; however, none of them fully reproduces all the symptoms of the disease. In addition, different investigators use different behavioral measures, which makes it even more difficult to compare and evaluate results. The aim of the present study was to compare motor and cognitive deficits in the two most common models of PD, the Rotenone and 6-OHDA models, using a large battery of neurological tests and a probabilistic learning task. To the best of our knowledge, this is the first study to examine the effects of bilaterally administered Rotenone and 6-OHDA through behavioral test batteries assessing the cardinal motor symptoms and the cognitive abnormalities of Parkinson's disease in the same rat population. The present study is also unique in providing both longitudinal observations of behaviour within the same treatment group and cross-sectional comparisons of behavioural responses between different groups. In the current study, the neurotoxins were applied at relatively low doses of 3-4 μg, bilaterally, to the substantia nigra pars compacta (SNpc). Experiments were conducted on 50 young-adult male Sprague–Dawley rats randomly assigned to five experimental groups: Rotenone, 6-OHDA, vehicle (DMSO/Saline), and the intact control. The neurological tests included locomotor activity, catalepsy, rearing, stepping, and rotarod/accelerod tests. They were applied prior to drug infusion and on days 4, 7, 10, 20, 40, and 150 afterwards, while the learning task was applied 49 days after drug infusion. During the first two postoperative months, both neurotoxins produced progressive deterioration in motor performance but showed no effect on cognitive functions. Five months after the surgery, regression of bradykinesia but persistence of sensorimotor deficits was noted. The test results suggest different susceptibility of different motor functions to the degeneration of the nigro-striatal (N-S) pathway; thus, different tests were shown to have different power in detecting similar motor deficits.
3

Probabilistic incremental learning for image recognition : modelling the density of high-dimensional data

Carvalho, Edigleison Francelino January 2014 (has links)
Nowadays, several sensory systems provide data in streams, and these measured observations are frequently high-dimensional, i.e., the number of measured variables is large and the observations arrive in a sequence. This is in particular the case for robot vision systems. Unsupervised and supervised learning with such data streams is challenging, because the algorithm should be capable of learning from each observation and then discarding it before considering the next one, yet several methods require the whole dataset in order to estimate their parameters and are therefore not suitable for online learning. Furthermore, many approaches suffer from the so-called curse of dimensionality (BELLMAN, 1961) and cannot handle high-dimensional input data. To overcome these problems, this work proposes a new probabilistic and incremental neural network model, called the Local Projection Incremental Gaussian Mixture Network (LP-IGMN), which is capable of performing life-long learning with high-dimensional data, i.e., it can learn continuously while preserving the stability of the current model's parameters, and it automatically adjusts its topology taking into account the subspace boundary found by each hidden neuron.
The proposed method can find the intrinsic subspace where the data lie, which is called the principal subspace. Orthogonal to the principal subspace are the dimensions that are noisy or carry little information, i.e., have small variance, and these are described by a single estimated parameter. Therefore, LP-IGMN is robust to different sources of data and can deal with a large number of noisy and/or irrelevant variables in the measured data. To evaluate LP-IGMN, we conducted several experiments using simulated and real datasets. We also demonstrated several applications of our method in image recognition tasks. The results show that LP-IGMN's performance is competitive with, and usually superior to, other state-of-the-art approaches, and that it can be successfully used in applications that require life-long learning in high-dimensional spaces.
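To make the single-pass, incremental idea concrete, here is a minimal sketch of an online Gaussian mixture learner in Python. It is not LP-IGMN itself: the thesis's model additionally estimates a local principal subspace per hidden neuron and a single variance parameter for the orthogonal directions, both omitted here; the novelty threshold and update rules below are illustrative assumptions.

```python
import numpy as np

class IncrementalGMM:
    """Minimal online Gaussian mixture with diagonal covariances: a sketch of
    the incremental core that LP-IGMN builds on (its local subspace projection
    is omitted). Each observation is processed once and then discarded."""

    def __init__(self, dim, novelty_tau=1e-4, var0=1.0):
        self.dim, self.tau, self.var0 = dim, novelty_tau, var0
        self.mu, self.var, self.n = [], [], []     # per-component statistics

    def _likelihoods(self, x):
        # Diagonal-covariance Gaussian density of x under each component.
        return np.array([
            np.exp(-0.5 * np.sum((x - m) ** 2 / v)) / np.sqrt(np.prod(2 * np.pi * v))
            for m, v in zip(self.mu, self.var)])

    def learn(self, x):
        lik = self._likelihoods(x) if self.mu else np.array([])
        if lik.size == 0 or lik.max() < self.tau:  # novelty: spawn a component
            self.mu.append(x.copy())
            self.var.append(np.full(self.dim, self.var0))
            self.n.append(1.0)
            return
        post = lik / lik.sum()                     # responsibilities
        for j, w in enumerate(post):               # incremental mean/variance
            self.n[j] += w
            eta = w / self.n[j]
            d_old = x - self.mu[j]
            self.mu[j] += eta * d_old
            d_new = x - self.mu[j]
            self.var[j] += eta * (d_old * d_new - self.var[j])

rng = np.random.default_rng(0)
gmm = IncrementalGMM(dim=2)
for _ in range(500):                               # two well-separated clusters
    c = rng.integers(2)
    gmm.learn(rng.normal(loc=4.0 * c, scale=0.5, size=2))
print(len(gmm.mu), [m.round(1) for m in gmm.mu])   # typically two components
```

The novelty test is what makes the topology adaptive: an observation that no existing component explains well spawns a new hidden unit, so the model grows with the data rather than fixing its size in advance.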
4

Effects of positive evidence, indirect negative evidence and form-function transparency on second language acquisition : evidence from L2 Chinese and L2 Thai

Prawatmuang, Woramon January 2018 (has links)
This study investigates second language (L2) acquisition of word orders and markers of collectivity in Chinese and Thai. One of the differences between Chinese and Thai is that Chinese nominal phrases follow a "numeral + classifier + noun" word order, while Thai phrases appear as "noun + numeral + classifier". Another difference is that men, the Chinese collective marker, cannot be used with nouns referring to animals or with indefinite nouns, while phûak, the Thai collective marker, can. Based on these cross-linguistic differences, an empirical study was conducted to determine whether Thai learners of Chinese and Chinese learners of Thai would be able to acquire target language (TL) structures that differ from those in their native language (L1), and whether they could reject incorrect TL structures. One hundred and forty-four participants were recruited to complete an acceptability judgment task and a self-paced reading task. It was found that both Chinese and Thai learners could perform in a native-like manner in their acceptance of TL word orders from the early stages of acquisition. However, only at an advanced level were they able to completely reject incorrect TL word orders that resembled structures in their L1. Thai learners also had difficulty rejecting the use of men with animal and indefinite nouns in their L2 Chinese. In contrast, Chinese learners tended to be successful in their acquisition of phûak. The results are interpreted in terms of the roles of positive evidence and form-function transparency. In general, L2 learners tend to acquire a TL structure earlier when they receive positive evidence in TL input and when the form-function connection of the structure is transparent. Nonetheless, these factors do not have an absolute effect on acquisition outcomes, since some learners may be able to use a probabilistic learning strategy to successfully acquire L2 knowledge even when positive evidence is unavailable.
5

Variational aleatoric uncertainty calibration in neural regression

Bhatt, Dhaivat 07 1900 (has links)
Calibrated and reliable confidence measures are a prerequisite for most robotic perception systems, since they are needed by the sensor fusion and planning components downstream. This is particularly true in the case of safety-critical applications such as self-driving cars. In the context of deep learning, the sources of predictive uncertainty are categorized into epistemic and aleatoric uncertainty; there is also distributional uncertainty associated with out-of-distribution data. Epistemic uncertainty, also known as knowledge uncertainty, arises from noise in the model structure and parameters, and can be reduced with more labeled data. Aleatoric uncertainty represents the inherent ambiguity in the input data and is generally irreducible in nature.
Several methods exist for estimating aleatoric uncertainty through modified network structures or loss functions. However, these methods generally lack calibration, meaning that the estimated uncertainties do not accurately represent the empirical data uncertainty. Current approaches to calibrating aleatoric uncertainty either require a held-out calibration dataset or modify the model parameters post-training; moreover, many add extra computation at inference time. To alleviate these issues, this thesis proposes a simple and effective method for training a calibrated neural regressor, designed from the first principles of calibration. Our key insight is that calibration can be achieved by imposing constraints across multiple examples, such as those in a mini-batch, as opposed to existing approaches that only impose constraints on a per-sample basis. By enforcing the distribution of outputs of the neural regressor (the proposal distribution) to resemble a target distribution through minimizing an f-divergence, we obtain significantly better-calibrated models compared to prior approaches. Our approach, f-Cal, is simple to implement or add to existing models and outperforms existing calibration methods on the large-scale real-world tasks of object detection and depth estimation. f-Cal can be implemented in 10-15 lines of PyTorch code and can be integrated with any probabilistic neural regressor in a minimally invasive way. This thesis also explores the estimation of distributional uncertainty for object detection, employing methods designed for classification setups. In particular, we attempt to detect out-of-distribution (OOD) samples, i.e., examples that are not part of the training data distribution, and we establish a background-OOD problem that hampers the applicability of distributional uncertainty methods in object detection specifically.
