1 |
Learning Visual Feature Hierarchies. Scalzo, Fabien. 04 December 2007.
This thesis addresses visual object recognition, a problem that remains a major challenge in computer vision. Despite more than twenty years of research, many facets of the problem are still unsolved. The design of an object recognition system rests essentially on three aspects: representation, detection, and machine learning.

The main contribution of this thesis is a generic framework for the statistical representation of visual features and their detection in images. The proposed model combines several concepts recently introduced in computer vision, machine learning, and neuroscience: spatial relations between visual features, graphical models, and hierarchies of complex cells. The result of this combination takes the form of a hierarchy of visual feature classes. Its main interest is to provide a model that captures both local and global visual aspects, using the geometric structure and the appearance of objects. Graphical models provide a probabilistic framework for representing the hierarchies and for using them in inference. A recently proposed message-passing algorithm, nonparametric belief propagation (NBP), is used to infer the position of the features in images.

During learning, the hierarchies are built incrementally, starting from low-level features. The learning algorithm is based on co-occurrence analysis and estimates both the structure and the parameters of the hierarchies.

The performance of the new system is evaluated on several object databases of increasing difficulty. In addition, a survey of the state of the art in object recognition methods and feature detectors gives a global view of the field.
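To make the co-occurrence idea concrete, here is a minimal Python sketch, not the author's implementation: pairs of low-level feature classes that repeatedly co-occur at a stable relative distance are promoted to candidate composite classes one level up. The function name, detection format, and thresholds are all hypothetical.

    import numpy as np
    from collections import defaultdict

    def propose_composites(detections_per_image, bin_width=8, min_count=2):
        # Each detection is a hypothetical (class_id, x, y) triple.
        # Quantised relative distances let stable geometric relations
        # between two feature classes accumulate in the same bin.
        counts = defaultdict(int)
        for dets in detections_per_image:
            for i, (ca, xa, ya) in enumerate(dets):
                for cb, xb, yb in dets[i + 1:]:
                    d = int(np.hypot(xb - xa, yb - ya)) // bin_width
                    counts[(min(ca, cb), max(ca, cb), d)] += 1
        # Frequent pairs become candidate parent classes in the hierarchy.
        return [pair for pair, n in counts.items() if n >= min_count]

    # Two toy images, each containing feature classes 0 and 1 at a
    # similar relative distance: the pair is proposed as a composite.
    images = [[(0, 10, 10), (1, 18, 10)], [(0, 40, 5), (1, 48, 5)]]
    print(propose_composites(images))  # [(0, 1, 1)]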
|
2 |
Causal Reasoning in Equivalence Classes. Jaber, Amin (14227610). 07 December 2022.
Causality is central to scientific inquiry across many disciplines, including epidemiology, medicine, and economics, to name a few. Researchers are usually interested not only in knowing how two events are correlated, but also in whether one causes the other and, if so, how. In general, scientific practice seeks not just a surface description of the observed data but deeper explanations, such as predicting the effects of interventions. The answer to such questions does not lie in the data alone and requires a qualitative understanding of the underlying data-generating process, knowledge that is articulated in a causal diagram.
Yet delineating the true, underlying causal diagram requires knowledge and assumptions that are usually not available in many non-trivial and large-scale situations. Hence, this dissertation develops the theory and algorithms needed to realize a data-driven framework for causal inference. More specifically, this work provides fundamental treatments of the following research questions:
Effect Identification under Markov Equivalence. One common task in many data science applications is to answer questions about the effect of new interventions, such as: 'what would happen to Y while observing Z=z if we force X to take the value x?'. Formally, this is known as causal effect identification, where the goal is to determine whether a post-interventional distribution is computable from the combination of an observational distribution and assumptions about the underlying domain represented by a causal diagram. In this dissertation, we assume as the input of the task a less informative structure known as a partial ancestral graph (PAG), which represents a Markov equivalence class of causal diagrams and is learnable from observational data. We develop tools and algorithms for this relaxed setting and characterize identifiable effects under necessary and sufficient conditions.
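For intuition only: when the full causal diagram is known and a covariate set Z satisfies the backdoor criterion, the effect is identified by the adjustment formula P(y | do(x)) = sum_z P(y | x, z) P(z). The numeric sketch below uses made-up probabilities; the dissertation treats the strictly harder case where only a PAG is available.

    import numpy as np

    # Hypothetical model Z -> X, Z -> Y, X -> Y with binary variables.
    p_z = np.array([0.3, 0.7])              # P(Z=z)
    p_y1_given_xz = np.array([[0.2, 0.4],   # P(Y=1 | X=0, Z=z)
                              [0.6, 0.8]])  # P(Y=1 | X=1, Z=z)

    # Backdoor adjustment: P(Y=1 | do(X=x)) = sum_z P(Y=1 | x, z) P(z).
    for x in (0, 1):
        p_do = float(p_y1_given_xz[x] @ p_z)
        print(f"P(Y=1 | do(X={x})) = {p_do:.2f}")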
Causal Discovery from Interventions. A causal diagram imposes constraints on the data it generates; conditional independences are one such example. Given a mixture of observational and experimental data, the goal is to leverage the constraints imprinted in the data to infer the set of causal diagrams that are compatible with them. In this work, we consider soft interventions, in which the mechanism of an intervened variable is modified without fully eliminating the effect of its direct causes, and we investigate two settings where the targets of the interventions are either known or unknown to the data scientist. Accordingly, we introduce the first general graphical characterizations to test whether two causal diagrams are indistinguishable given the constraints in the available data. We also develop algorithms that, given a mixture of observational and interventional data, learn a representation of the equivalence class.
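As a loose illustration of how interventional data constrains discovery, and not the graphical characterization developed in the dissertation, the sketch below compares each variable's distribution across the observational and experimental regimes: under a soft intervention the intervened mechanism shifts while the others stay invariant. All names and numbers are invented.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)

    def sample(n, shift=0.0):
        # Toy chain X -> Y; a soft intervention shifts Y's mechanism
        # without cutting its dependence on X.
        x = rng.normal(size=n)
        y = 0.8 * x + shift + rng.normal(size=n)
        return {"X": x, "Y": y}

    obs, intv = sample(2000), sample(2000, shift=1.5)

    # Variables whose distribution changed across regimes are candidate
    # intervention targets (a crude marginal proxy for mechanism change).
    for v in ("X", "Y"):
        res = ks_2samp(obs[v], intv[v])
        print(f"{v}: KS p-value = {res.pvalue:.3g}",
              "changed" if res.pvalue < 0.01 else "invariant")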
|
3 |
Experimental neuropsychological tests of feature ambiguity, attention and structural learning: associations with white matter microstructural integrity in elderly with amnesic and vascular mild cognitive impairment. Young, Bob Neill. January 2014.
Mild cognitive impairment (MCI) is a transition phase between normal aging and Alzheimer's disease. Individuals with MCI show impairment in cognition as well as corresponding damage to areas of the brain. Performance on tasks such as discriminating objects with ambiguous features has been associated with damage to the perirhinal cortex, while performance on scenes with structural (spatial) elements has been associated with damage to the hippocampus. In addition, attention is regarded as one of the first non-memory domains to decline in MCI. A relatively new MRI technique, diffusion tensor imaging (DTI), is sensitive to white matter microstructural integrity and has been associated with changes due to cognitive decline. Eighteen MCI participants (14 amnesic, 4 vascular) and 12 matched healthy controls were tested on feature ambiguity, attention, and structural learning to identify associated deficits in MCI. Associations with white matter microstructural integrity were then investigated. The MCI groups performed worse than controls on the test of structural learning. In addition, altered attention networks were found in MCI and were associated with white matter microstructural integrity. No significant differences were found for feature ambiguity. These findings suggest there may be specific damage to the hippocampus while the perirhinal cortex may be preserved in MCI. Furthermore, dysfunction in attention was associated with white matter microstructural integrity. These experimental tests may be useful in assessing dysfunction in MCI and in identifying degeneration in white matter microstructural integrity. Further studies with larger sample sizes are needed to validate these findings.
|
4 |
A Bayesian network applied to the health management of systems in cloud computing (Rede Bayesiana empregada no gerenciamento da saúde dos sistemas na computação em nuvem). Alves, Renato dos Santos. 10 August 2016.
Cloud computing is a convenient computing model because it enables ubiquitous, on-demand access to a pool of configurable, shared resources that can be rapidly provisioned and made available with minimal effort or interaction with the service provider. IaaS is one way of delivering cloud computing, in which the server infrastructure, networking, storage, and the whole environment needed to run the operating system and applications are contracted as services. Nevertheless, traditional companies still have doubts about moving their data outside the boundaries of the corporation. The health of cloud computing systems is fundamental to the business, and given the complexity of those systems it is difficult to guarantee that all services and resources will work properly. To support more adequate management of systems and services in the cloud, this work proposes a health-diagnosis architecture for cloud systems. The architecture is modularized into specialized functions for monitoring, data mining, and inference with Bayesian networks. Event records from the monitored systems and computing resources are essential in this architecture, because the recorded data is mined to identify fault patterns that follow one or more events in the environment. Two algorithms were proposed for mining the monitoring data: one performs data preprocessing and the other performs data transformation. The mining step yields the data sets that serve as input for building the Bayesian networks. Through structural and parametric learning algorithms, Bayesian networks were created for each of the systems and services offered through the cloud. The Bayesian networks are intended to support decision making in the prevention, prediction, and correction of faults in systems and services, allowing their health and performance to be managed more appropriately. To verify the architecture's fault-diagnosis capability, the inference accuracy of the Bayesian networks was validated with the cross-validation method, using data sets generated by monitoring the systems and services.
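As a minimal, self-contained sketch of the kind of diagnosis such an architecture performs, with invented event names, probabilities, and structure, the following computes a fault posterior in a three-node discrete Bayesian network by enumeration. In the actual work, the structure and parameters are learned from mined monitoring data and validated with cross-validation; here everything is hard-coded for brevity.

    # Toy network: cpu_overload -> service_fault <- disk_error.
    # All probabilities are hypothetical placeholders.
    p_cpu = {True: 0.1, False: 0.9}        # P(cpu_overload)
    p_disk = {True: 0.05, False: 0.95}     # P(disk_error)
    p_fault = {                            # P(service_fault=True | cpu, disk)
        (True, True): 0.95, (True, False): 0.60,
        (False, True): 0.50, (False, False): 0.02,
    }

    def posterior_cpu_given_fault():
        """P(cpu_overload=True | service_fault=True) by enumeration."""
        joint = {(c, d): p_cpu[c] * p_disk[d] * p_fault[(c, d)]
                 for c in (True, False) for d in (True, False)}
        total = sum(joint.values())
        return sum(v for (c, _), v in joint.items() if c) / total

    print(f"P(cpu_overload | fault) = {posterior_cpu_given_fault():.3f}")  # ~0.609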
|
5 |
Some techniques for the development and use of tagging resources for resource-scarce languages (Enkele tegnieke vir die ontwikkeling en benutting van etiketteringhulpbronne vir hulpbronskaars tale) / A.C. Griebenow. Griebenow, Annick. January 2015.
Because the development of resources in any language is an expensive process, many languages, including the indigenous languages of South Africa, can be classified as resource scarce, or lacking in tagging resources. This study investigates and applies techniques and methodologies for optimising the use of available resources and improving the accuracy of a tagger, using Afrikaans as the resource-scarce language, and aims to i) determine whether combination techniques can be effectively applied to improve the accuracy of a tagger for Afrikaans, and ii) determine whether structural semi-supervised learning can be effectively applied to improve the accuracy of a supervised-learning tagger for Afrikaans. To realise the first aim, existing methodologies for combining classification algorithms are investigated. Four taggers, trained using MBT, SVMlight, MXPOST and TnT respectively, are then combined into a combination tagger using weighted voting, with weights calculated from total precision, tag precision, or a combination of precision and recall. Although the combination of taggers does not consistently reduce the error rate relative to the baseline, it achieves an error rate reduction of up to 18.48% in some cases. To realise the second aim, existing semi-supervised learning algorithms, with a specific focus on structural semi-supervised learning, are investigated. Structural semi-supervised learning is implemented by means of the SVD-ASO algorithm, which attempts to extract the shared structure of untagged data using auxiliary problems before training a tagger. The use of untagged data during the training of a tagger leads to an error rate reduction of 1.67% relative to the baseline. Even though the error rate reduction does not prove to be statistically significant in all cases, the results show that it is possible to improve the accuracy in some cases. / MSc (Computer Science), North-West University, Potchefstroom Campus, 2015
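A minimal sketch of the weighted-voting combination described above; the tagger names match the study, but the tags, weights, and function are invented for illustration (in the study, weights come from total precision, tag precision, or precision combined with recall).

    from collections import defaultdict

    def combine(tag_votes, weights):
        """Pick, per token, the tag with the largest summed tagger weight.

        tag_votes: list of dicts mapping tagger name -> predicted tag.
        weights:   dict mapping tagger name -> voting weight (e.g. its
                   precision on held-out data).
        """
        combined = []
        for votes in tag_votes:
            score = defaultdict(float)
            for tagger, tag in votes.items():
                score[tag] += weights[tagger]
            combined.append(max(score, key=score.get))
        return combined

    # Hypothetical predictions from the four taggers for two tokens.
    votes = [{"MBT": "NOUN", "SVMlight": "NOUN", "MXPOST": "VERB", "TnT": "VERB"},
             {"MBT": "ADJ", "SVMlight": "NOUN", "MXPOST": "ADJ", "TnT": "ADJ"}]
    weights = {"MBT": 0.96, "SVMlight": 0.95, "MXPOST": 0.93, "TnT": 0.97}
    print(combine(votes, weights))  # ['NOUN', 'ADJ']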
|
6 |
A comparative study of scoring metrics for structural learning of Bayesian networks (Estudo Comparativo de Métricas de Pontuação para Aprendizagem Estrutural de Redes Bayesianas). Pifer, Aderson Cleber. 30 August 2006.
Bayesian networks are powerful tools that represent probability distributions as graphs and handle the uncertainties of real systems. Over the last decade there has been special interest in learning network structures from data. However, learning the best network structure is an NP-hard problem, so many heuristic algorithms for generating network structures from data have been created. Many of these algorithms use score metrics to evaluate candidate network models. This thesis compares three of the most widely used score metrics. The K2 search algorithm and two standard benchmarks, ASIA and ALARM, were used to carry out the comparison. The results show that, for both the Heckerman-Geiger and the modified MDL metrics, hyperparameter settings that strengthen the tendency to select simpler network structures outperform settings with a weaker tendency toward simpler structures. The Heckerman-Geiger Bayesian score metric works better than MDL on large datasets, and MDL works better than Heckerman-Geiger on small datasets. When its parameters restrict the creation of edges, the modified MDL gives results similar to Heckerman-Geiger on large datasets and close to MDL on small datasets.
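To make the comparison concrete, here is a hedged sketch, not the thesis's exact metrics, of a decomposable score from the MDL/BIC family together with a K2-style greedy parent search that consumes it; the Heckerman-Geiger and modified-MDL metrics studied above differ in their penalty terms and hyperparameters. The data and names below are invented.

    import numpy as np
    from itertools import product

    def mdl_score(data, child, parents, arity):
        """Log-likelihood of `child` given `parents` minus an MDL penalty.
        `data` is an (n_samples, n_vars) integer array; `arity[i]` is the
        number of states of variable i."""
        n = len(data)
        q = int(np.prod([arity[p] for p in parents])) if parents else 1
        r = arity[child]
        ll = 0.0
        for pa_state in product(*[range(arity[p]) for p in parents]):
            mask = np.ones(n, dtype=bool)
            for p, s in zip(parents, pa_state):
                mask &= data[:, p] == s
            counts = np.bincount(data[mask, child], minlength=r)
            nij = counts.sum()
            nz = counts[counts > 0]
            if nij:
                ll += float((nz * np.log(nz / nij)).sum())
        penalty = 0.5 * np.log(n) * q * (r - 1)  # MDL/BIC complexity term
        return ll - penalty

    def k2_parents(data, child, candidates, arity, max_parents=2):
        """K2-style greedy search: from the nodes preceding `child` in a
        fixed ordering, add the parent that most improves the score until
        no candidate helps or the parent limit is reached."""
        parents, best = [], mdl_score(data, child, [], arity)
        while len(parents) < max_parents:
            gains = [(mdl_score(data, child, parents + [c], arity), c)
                     for c in candidates if c not in parents]
            if not gains:
                break
            score, c = max(gains)
            if score <= best:
                break
            parents.append(c)
            best = score
        return parents

    # Toy data: X0 -> X1 (X1 copies X0 with 10% noise), X2 independent.
    rng = np.random.default_rng(1)
    x0 = rng.integers(0, 2, 500)
    noise = (rng.random(500) < 0.1).astype(int)
    x1 = x0 ^ noise
    x2 = rng.integers(0, 2, 500)
    data = np.column_stack([x0, x1, x2])
    print(k2_parents(data, child=1, candidates=[0, 2], arity=[2, 2, 2]))  # [0]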
|