  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

\"Processamento e análise de imagens para medição de vícios de refração ocular\" / Image Processing and Analysis for Measuring Ocular Refraction Errors

Valerio Netto, Antonio 18 August 2003 (has links)
This work presents a computational system that uses Machine Learning (ML) techniques to assist in ophthalmological diagnosis. The system produces objective, automatic measurements of the main ocular refraction errors, namely astigmatism, hypermetropia and myopia, from images of the human eye acquired with a technique known as Hartmann-Shack (HS), or Shack-Hartmann (SH). Conventional image processing techniques are first applied to these images to remove noise and to extract and frame the region of interest. The Gabor wavelet transform is then used to extract feature vectors from the images, which are input to ML techniques that output a diagnosis of the refractive errors present in the imaged eye. Results indicate that the approach holds promise for the interpretation of HS images, so that in the future other ocular problems may be detected and measured from the same images. In addition to implementing a novel approach for measuring refractive errors and introducing ML techniques into the analysis of ophthalmological images, this work investigates the use of Artificial Neural Networks and Support Vector Machines (SVMs) for Image Understanding tasks. The development of this system allows a critical assessment of the suitability and limitations of such techniques for Image Understanding tasks in real-world problems.
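The feature-extraction stage the abstract describes (Gabor filter responses summarized into a vector, then handed to a classifier) can be sketched as follows. The kernel size, sigma, wavelength and orientations below are illustrative assumptions, not the thesis's actual parameters:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=8.0):
    """Real part of a 2-D Gabor kernel: Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # coordinates rotated by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * xr / lam)

def gabor_features(image, thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Feature vector: mean and variance of the filter response at each orientation."""
    feats = []
    for theta in thetas:
        resp = convolve2d(image, gabor_kernel(theta=theta), mode="valid")
        feats.extend([resp.mean(), resp.var()])
    return np.array(feats)

# Example: an 8-dimensional feature vector from a toy 64x64 "eye image"
rng = np.random.default_rng(0)
features = gabor_features(rng.normal(size=(64, 64)))
```

A vector like `features` would then be the input to the SVM or neural-network stage; in the actual system a bank of many more scales and orientations would be used.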
12

Contributions à l’apprentissage automatique pour l’analyse d’images cérébrales anatomiques / Contributions to statistical learning for structural neuroimaging data

Cuingnet, Rémi 29 March 2011 (has links)
Brain image analyses have widely relied on univariate voxel-wise methods: brain images are first spatially registered to a common stereotaxic space, and mass univariate statistical tests are then performed in each voxel to detect significant group differences. However, the sensitivity of these approaches is limited when the differences involve a combination of several brain structures. Recently, there has been growing interest in support vector machine (SVM) methods to overcome these limits. This thesis focuses on machine learning methods for population analysis and patient classification in neuroimaging. We first evaluated the performance of different classification strategies for the identification of patients with Alzheimer's disease, based on T1-weighted MRI of 509 subjects from the ADNI database. These strategies, however, do not take full advantage of the spatial distribution of the features; as a consequence, the optimal margin hyperplane is often scattered and lacks spatial coherence, making its anatomical interpretation difficult. We therefore introduce an original framework to spatially and anatomically regularize support vector machines for volumetric or surface-based neuroimaging data, in the formalism of Laplacian regularization. The method was applied to two clinical problems: Alzheimer's disease and stroke. The evaluation shows that the proposed classifier generates less noisy, and consequently more interpretable, feature maps with no loss of classification performance.
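The core idea of the abstract's regularization can be illustrated by replacing the usual ridge penalty of a linear SVM with a penalty of the form w'(lam*I + beta*L)w, where L is a graph Laplacian over the image grid. The sketch below uses a 1-D chain Laplacian, plain subgradient descent and arbitrary hyperparameters; it illustrates the penalty, not the thesis's actual formulation:

```python
import numpy as np

def chain_laplacian(d):
    """Graph Laplacian of a 1-D chain of d nodes (a stand-in for a voxel-grid Laplacian)."""
    L = 2.0 * np.eye(d)
    L[0, 0] = L[-1, -1] = 1.0
    for i in range(d - 1):
        L[i, i + 1] = L[i + 1, i] = -1.0
    return L

def spatially_regularized_svm(X, y, L, lam=0.01, beta=0.1, lr=0.05, epochs=300):
    """Subgradient descent on hinge loss + lam/2 * ||w||^2 + beta/2 * w' L w."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        inside = y * (X @ w) < 1                       # margin violators
        g_hinge = -(y[inside, None] * X[inside]).sum(axis=0) / n
        g_reg = lam * w + beta * (L @ w)               # ridge + spatial smoothness
        w -= lr * (g_hinge + g_reg)
    return w

# Toy separable data: class +1 near +1, class -1 near -1 in every feature
X = np.vstack([np.ones((10, 5)), -np.ones((10, 5))])
y = np.array([1.0] * 10 + [-1.0] * 10)
w = spatially_regularized_svm(X, y, chain_laplacian(5))
accuracy = (np.sign(X @ w) == y).mean()
```

The `beta * (L @ w)` term pushes neighboring weights toward equal values, which is what yields the spatially coherent, anatomically interpretable weight maps the abstract reports.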
13

Exploring variabilities through factor analysis in automatic acoustic language recognition

Verdet, Florian 05 September 2011 (has links) (PDF)
Language Recognition is the problem of discovering the language of a spoken utterance. This thesis achieves this goal by using short-term acoustic information within a GMM-UBM approach. The main problem of many pattern recognition applications is the variability of the observed data. In the context of Language Recognition (LR), this troublesome variability is due to speaker characteristics, speech evolution, and acquisition and transmission channels. In the context of Speaker Recognition, the variability problem is solved by the Joint Factor Analysis (JFA) technique. Here, we introduce this paradigm to Language Recognition. The success of JFA relies on several assumptions: the global JFA assumption is that the observed information can be decomposed into a universal global part, a language-dependent part and a language-independent variability part. The second, more technical assumption is that the unwanted variability part lives in a low-dimensional, globally defined subspace. In this work, we analyze how JFA behaves in the context of a GMM-UBM LR framework. We also introduce and analyze its combination with Support Vector Machines (SVMs). The first JFA publications put all unwanted information (hence the variability) into one and the same component, which is assumed to follow a Gaussian distribution. This handles diverse kinds of variability in a uniform manner. In practice, however, we observe that this hypothesis is not always verified. We have, for example, the case where the data can be divided into two clearly separate subsets, namely data from telephony and from broadcast sources. In this case, our detailed investigations show that there is some benefit in handling the two kinds of data with two separate systems and then selecting the output score of the system that corresponds to the source of the test utterance. For selecting the score of one or the other system, we need a channel-source detector. We propose several novel designs for such automatic detectors. In this framework, we show that JFA's variability factors (of the subspace) can be used successfully for detecting the source. This opens the interesting perspective of partitioning the data into automatically determined channel-source categories, avoiding the need for source-labeled training data, which is not always available. The JFA approach yields up to 72% relative cost reduction compared to the GMM-UBM baseline system. Using source-specific systems followed by a score selector, we achieve an 81% relative improvement.
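The global JFA assumption above is usually written M = m + Vy + Ux: a GMM mean supervector M decomposes into a universal part m, a language part Vy, and a channel-variability part Ux confined to a low-rank subspace U. A minimal numerical illustration follows; the dimensions and random matrices are arbitrary, and the real method estimates the factors by MAP rather than least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

D, r_lang, r_chan = 200, 6, 3     # supervector dim; subspace ranks (toy values)
m = rng.normal(size=D)            # universal (UBM) mean supervector
V = rng.normal(size=(D, r_lang))  # language loading matrix
U = rng.normal(size=(D, r_chan))  # channel/variability loading matrix

y = rng.normal(size=r_lang)       # language factors of one utterance
x = rng.normal(size=r_chan)       # channel factors of the same utterance

M = m + V @ y + U @ x             # synthetic utterance supervector

# Recover the channel factors from the residual after removing m and the
# language part; with a tall full-rank U, least squares recovers x exactly.
x_hat, *_ = np.linalg.lstsq(U, M - m - V @ y, rcond=None)
```

The recovered low-dimensional factors `x_hat` correspond to the "variability factors (of the subspace)" that the abstract reuses as input features for the channel-source detector.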
