  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
201

Reconhecimento de assinaturas baseado em seus ruídos caligráficos / Recognition of signatures based on their calligraphic noise

Escola, João Paulo Lemos 04 February 2014 (has links)
Biometrics is the process of recognizing living beings by their physiological or behavioral characteristics. Many biometric methods exist today, and the handwritten signature on paper is one of the oldest behavioral measurement techniques. Through audio signal processing, it is possible to recognize patterns in the noise emitted by a pen while signing. To increase the success rate when validating a person's signature, this work proposes a technique based on an algorithm that combines two Support Vector Machines (SVMs), trained with a semi-supervised learning procedure and fed with a set of parameters obtained from the Discrete Wavelet Transform of the audio signal of the noise emitted by the pen when signing on a hard surface. Tests with a database of real signatures, using several wavelet filters, demonstrate the effectiveness of the proposed technique.
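The feature-extraction step the abstract describes (a DWT of the pen-noise audio feeding SVMs) can be sketched roughly as follows. This is a minimal illustration, not the author's implementation: the Haar filter stands in for whichever wavelet filters were tested, and per-band energies are one plausible choice of DWT-derived parameters.

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.
    Returns (approximation, detail) coefficient lists."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))
        detail.append((a - b) / math.sqrt(2))
    return approx, detail

def wavelet_band_energies(signal, levels=3):
    """Decompose `levels` times and collect the energy of each detail
    band plus the final approximation -- a compact feature vector of
    the kind that could be fed to a classifier such as an SVM."""
    features = []
    current = list(signal)
    for _ in range(levels):
        current, detail = haar_dwt(current)
        features.append(sum(d * d for d in detail))
    features.append(sum(a * a for a in current))
    return features
```

Because the Haar transform is orthonormal, total signal energy is preserved across the decomposition, which makes the band energies well-behaved as features.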
202

Análise de wavelets com máquina de vetor de suporte no eletrencefalograma da doença de Alzheimer / Wavelets analysis with support vector machine in Alzheimer's disease EEG

Kanda, Paulo Afonso Medeiros 07 March 2013 (has links)
INTRODUCTION: The aim of this study was to determine whether the Morlet wavelet transform and a machine learning (ML) technique, Support Vector Machines (SVM), are suitable for finding patterns in the EEG that differentiate normal controls from patients with Alzheimer's disease (AD). There is no specific diagnostic test for AD; its diagnosis is based on clinical history, neuropsychological testing, laboratory exams, neuroimaging, and electroencephalography. Therefore, new approaches are needed to allow an earlier and more accurate diagnosis and to measure response to treatment. Quantitative EEG (qEEG) can be used as a diagnostic tool in selected cases. METHODS: The patients came from the outpatient clinic of the Cognitive Neurology and Behavior Group (GNCC) of the Division of Clinical Neurology, HCFMUSP, or were evaluated by the Cognitive Electroencephalography Laboratory group of CEREDIC HC-FMUSP. We studied EEGs from 74 normal subjects (33 women/41 men, mean age 67 years) and 84 patients with mild to moderate probable AD (52 women/32 men, mean age 74.7 years). The wavelet transform and feature selection were processed with the Letswave software. SVM analysis of the features (delta, theta, alpha, and beta bands) was computed with the WEKA tool (Waikato Environment for Knowledge Analysis). RESULTS: Classification of the control and AD groups achieved an accuracy of 90.74% and a ROC area of 0.90. Identification of a single proband among all others achieved an accuracy of 81.01% and a ROC area of 0.80. A quantitative EEG (qEEG) processing method was developed for the automatic differentiation of AD patients from normal subjects. The process is intended to complement the diagnosis of probable dementia, particularly in health services where resources are limited.
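The Morlet-wavelet step above turns each EEG channel into band-power features before the SVM sees them. A rough, self-contained sketch of that idea is below; it is not the study's Letswave/WEKA pipeline, the function names are hypothetical, and a fixed five-cycle wavelet is assumed.

```python
import math, cmath

def morlet(freq, sfreq, n_cycles=5.0):
    """Sampled complex Morlet wavelet centred on `freq` Hz."""
    sigma_t = n_cycles / (2 * math.pi * freq)
    half = int(3 * sigma_t * sfreq)
    taps = []
    for k in range(-half, half + 1):
        t = k / sfreq
        gauss = math.exp(-t * t / (2 * sigma_t ** 2))
        taps.append(gauss * cmath.exp(2j * math.pi * freq * t))
    return taps

def wavelet_power(signal, freq, sfreq):
    """Mean squared magnitude of the wavelet convolution -- a rough
    power estimate at `freq` for one EEG channel. Evaluating this at
    delta/theta/alpha/beta centre frequencies yields band features."""
    w = morlet(freq, sfreq)
    half = len(w) // 2
    total, count = 0.0, 0
    for n in range(half, len(signal) - half):
        acc = sum(signal[n + k - half] * w[k].conjugate()
                  for k in range(len(w)))
        total += abs(acc) ** 2
        count += 1
    return total / max(count, 1)
```

A signal oscillating at 10 Hz should yield far more power under a 10 Hz wavelet than under a 20 Hz one, which is the property the band features rely on.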
203

Uma abordagem multinível usando algoritmos genéticos em um comitê de LS-SVM / A multilevel approach using genetic algorithms in an LS-SVM ensemble

Padilha, Carlos Alberto de Araújo January 2018 (has links)
Ensemble systems have proven to be an efficient way to increase the accuracy and stability of learning algorithms in recent decades, although their construction leaves one question to be elucidated: diversity. Disagreement among the models that compose the ensemble can be generated when they are built under different circumstances, such as the training dataset, parameter settings, and the choice of learning algorithms. An ensemble can be viewed as a structure with three levels: the input space, the base components, and the block that combines the components' responses. This work proposes a multilevel approach using genetic algorithms to build an ensemble of Least Squares Support Vector Machines (LS-SVM): performing feature selection in the input space; handling parameterization and the choice of which models will compose the ensemble at the component level; and searching for the weight vector that best represents the importance of each classifier in the ensemble's final response. To evaluate the performance of the proposed approach, benchmarks from the UCI repository were used for comparison with other classification algorithms. The results were also compared with deep learning methods on the MNIST and CIFAR datasets and proved very satisfactory.
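The last of the three levels above, searching for a weight vector that scores each classifier's contribution to the ensemble vote, can be sketched with a toy genetic algorithm. This is an illustration under simplifying assumptions (precomputed ±1 member outputs, accuracy as fitness, one-point crossover), not the thesis's algorithm.

```python
import random

def ga_weights(members, labels, pop=30, gens=40, seed=1):
    """Evolve a weight vector for combining the +/-1 outputs of
    ensemble members; fitness is weighted-vote accuracy."""
    rng = random.Random(seed)
    m = len(members)

    def accuracy(w):
        hits = 0
        for i, y in enumerate(labels):
            s = sum(w[j] * members[j][i] for j in range(m))
            hits += (1 if s >= 0 else -1) == y
        return hits / len(labels)

    population = [[rng.random() for _ in range(m)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=accuracy, reverse=True)
        parents = population[: pop // 2]          # elitist selection
        children = []
        while len(parents) + len(children) < pop:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, m) if m > 1 else 0
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.2:                # mutation
                k = rng.randrange(m)
                child[k] = rng.random()
            children.append(child)
        population = parents + children
    best = max(population, key=accuracy)
    return best, accuracy(best)
```

With one reliable member, one adversarial member, and one noisy member, the GA should learn to up-weight the reliable one.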
204

Detecção inteligente de patologias na laringe baseada em máquinas de vetores de suporte e na transformada wavelet / Intelligent detection of larynx pathologies based on support vector machines and wavelet transform

Leonardo Mendes de Souza 07 February 2011 (has links)
Larynx pathologies have been detected basically through medical diagnosis supported by videolaryngoscopy, which is considered an invasive procedure and causes some discomfort to the patient. Moreover, this kind of examination requires a medical request and is carried out only when speech changes are already pronounced or are causing pain, at which point the disease is often at an advanced stage, making treatment difficult. Aiming at a pre-diagnosis of such pathologies, this work proposes a non-invasive technique based on a new algorithm that combines two Support Vector Machines, trained with a semi-supervised learning procedure and fed with a set of parameters obtained from the Discrete Wavelet Transform of the speaker's voice signal. Tests with a database of normal and pathological voices demonstrate the effectiveness of the proposed technique, which can even be implemented in real time.
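The semi-supervised training loop that this abstract (and the signature-noise one above) relies on can be sketched as self-training: label the unlabeled sample the current model is most confident about, absorb it, and repeat. A nearest-centroid rule stands in here for the pair of SVMs; the function names are hypothetical and the sketch is not the thesis's procedure.

```python
def centroid(points):
    d = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(d)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def self_train(labeled, unlabeled, rounds=5):
    """Semi-supervised self-training sketch: repeatedly label the
    unlabeled point closest to a class centroid and absorb it."""
    labeled = dict(labeled)          # point (tuple) -> label
    pool = list(unlabeled)
    for _ in range(rounds):
        if not pool:
            break
        cents = {lab: centroid([p for p, l in labeled.items() if l == lab])
                 for lab in set(labeled.values())}
        # pick the unlabeled point we are most confident about
        best = min(pool, key=lambda p: min(dist2(p, c) for c in cents.values()))
        labeled[best] = min(cents, key=lambda l: dist2(best, cents[l]))
        pool.remove(best)
    return labeled
```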
205

Video Analysis of Mouth Movement Using Motion Templates for Computer-based Lip-Reading

Yau, Wai Chee, waichee@ieee.org January 2008 (has links)
This thesis presents a novel lip-reading approach to classifying utterances from video data, without evaluating voice signals. The work addresses two important issues: the efficient representation of mouth movement for visual speech recognition, and the temporal segmentation of utterances from video. The first part of the thesis describes a robust movement-based technique to identify mouth movement patterns while uttering phonemes. This method temporally integrates the video data of each phoneme into a 2-D grayscale image named a motion template (MT). This is a view-based approach that implicitly encodes the temporal component of an image sequence into a scalar-valued MT. The data size was reduced by extracting image descriptors such as Zernike moments (ZM) and discrete cosine transform (DCT) coefficients from the MT. A support vector machine (SVM) and a hidden Markov model (HMM) were used to classify the feature descriptors. A video speech corpus of 2800 utterances was collected to evaluate the efficacy of MTs for lip-reading. The experimental results demonstrate the promising performance of MTs in mouth movement representation; their advantages and limitations for visual speech recognition were identified and validated through experiments. A comparison between ZM and DCT features indicates that the classification accuracy of the two methods is very comparable when there is no relative motion between the camera and the mouth. Nevertheless, ZM is resilient to rotation of the camera and continues to give good results despite rotation, whereas DCT is sensitive to rotation. DCT features are demonstrated to have better tolerance to image noise than ZM. The results also demonstrate a slight improvement of 5% using SVM as compared to HMM. The second part of this thesis describes a video-based temporal segmentation framework to detect the key frames corresponding to the start and stop of utterances in an image sequence, without using the acoustic signals. This segmentation technique integrates mouth movement and appearance information. Its efficacy was tested through experimental evaluation, and satisfactory performance was achieved; the method has been demonstrated to perform efficiently for utterances separated by short pauses. Potential applications for lip-reading technologies include human-computer interfaces (HCI) for mobility-impaired users, defense applications that require voiceless communication, lip-reading mobile phones, in-vehicle systems, and improvement of speech-based computer control in noisy environments.
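The motion-template idea above, collapsing a frame sequence into one grayscale image whose intensity encodes when each pixel last moved, has a compact core. The sketch below is a generic motion-history-image accumulator under assumed parameters (linear recency ramp, fixed difference threshold), not the thesis's exact formulation.

```python
def motion_template(frames, threshold=10):
    """Collapse a grayscale frame sequence (list of 2-D lists) into a
    single 2-D motion template: more recent movement at a pixel is
    encoded as a brighter intensity, static regions stay at 0."""
    rows, cols = len(frames[0]), len(frames[0][0])
    mt = [[0.0] * cols for _ in range(rows)]
    n = len(frames) - 1
    for t in range(1, len(frames)):
        level = 255.0 * t / n        # brighter = more recent
        for r in range(rows):
            for c in range(cols):
                if abs(frames[t][r][c] - frames[t - 1][r][c]) > threshold:
                    mt[r][c] = level
    return mt
```

The resulting image is what descriptors such as Zernike moments or DCT coefficients would then be computed from.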
206

Maintenir la viabilité ou la résilience d'un système : les machines à vecteurs de support pour rompre la malédiction de la dimensionnalité ? / Maintaining the viability or resilience of a system: support vector machines to break the curse of dimensionality?

Chapel, Laëtitia 19 October 2007 (has links) (PDF)
Viability theory provides concepts and methods for controlling a dynamical system so as to keep it inside a set of viability constraints. Applications abound in ecology, economics, and robotics, wherever a system dies or deteriorates once it leaves a certain region of its state space. From the computation of a system's viability kernel or capture basin, the theory makes it possible to define action policies that keep the system inside the chosen constraint set. However, the classical algorithms for approximating viability kernels or capture basins have limitations; in particular, they suffer from the curse of dimensionality, and their application is restricted to problems of small dimension (in the state and control spaces). The objective of this thesis is to develop more efficient algorithms for approximating viability kernels and capture basins, using a particular statistical learning method: Support Vector Machines (SVMs). We propose a new viability kernel approximation algorithm, based on Patrick Saint-Pierre's algorithm, that uses a learning method to define the boundary of the kernel. After recalling the mathematical conditions the procedure must satisfy, we consider SVMs in this context. This learning method provides a kind of barrier function at the boundary of the kernel, which makes it possible to use optimization methods to find a viable control and thus to work with larger control spaces. This "barrier" function also allows more or less cautious control policies to be derived. We apply the procedure to a fisheries management problem, examining which fishing policies can guarantee the viability of a marine ecosystem. This example illustrates the performance of the proposed method: the system has 6 state variables and 51 control variables. From the SVM-based viability kernel approximation algorithm, we derive an algorithm for approximating capture basins and for solving minimal-time target-hitting problems. Approximating the minimal-time function amounts to computing the viability kernel of an extended system. We present a procedure that stays in the original state space, thereby avoiding the cost (in computation time and memory) of adding an extra dimension. We describe two variants of the algorithm: the first gives an approximation that converges from the outside, the second from the inside. Comparing the two results gives an estimate of the approximation error. The inner approximation makes it possible to define a controller that guarantees reaching the target in minimal time. The procedure can be extended to the problem of minimizing a cost function when the latter satisfies certain conditions. We illustrate this aspect through the computation of resilience values, applying the procedure to a lake eutrophication model. The proposed algorithms solve the problem of the exponential growth of computation time with the dimension of the control space, but still suffer from the curse of dimensionality in the state space: the size of the training set grows exponentially with the dimension of the space. We introduce active learning techniques to select the most "informative" states for defining the SVM function, thereby saving memory while keeping an accurate approximation of the kernel. We illustrate the procedure on the problem of riding a bicycle around a track, a system defined by six state variables.
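The kernel-elimination idea underlying Saint-Pierre's algorithm, repeatedly discarding states from which no control keeps the system in the current kernel, can be sketched on a 1-D grid. The thesis replaces the explicit grid set below with an SVM decision boundary to scale beyond small dimensions; this toy version, with made-up dynamics, only illustrates the fixed-point iteration.

```python
def viability_kernel(states, controls, step, constraint, iterations=50):
    """Grid sketch of the viability-kernel iteration: discard any state
    from which every admissible control leads outside the current
    kernel, until a fixed point is reached."""
    kernel = {s for s in states if constraint(s)}
    grid = sorted(states)

    def snap(x):  # project a successor onto the grid
        return min(grid, key=lambda g: abs(g - x))

    for _ in range(iterations):
        survivors = {s for s in kernel
                     if any(snap(step(s, u)) in kernel for u in controls)}
        if survivors == kernel:
            break
        kernel = survivors
    return kernel
```

With an upward drift `s // 3` and controls bounded by 2, states above 8 cannot be held inside the constraint `s <= 10`, so the iteration prunes them.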
207

Investigation of multivariate prediction methods for the analysis of biomarker data

Hennerdal, Aron January 2006 (has links)
The paper describes predictive modelling of biomarker data from patients suffering from multiple sclerosis. Improvements to multivariate analyses of the data are investigated, with the goal of increasing the capability to assign samples to the correct subgroups from the data alone.

The effects of different preceding scalings of the data are investigated, and combinations of multivariate modelling methods and variable selection methods are evaluated. Attempts are made at merging the predictive capabilities of the method combinations through voting procedures. A technique for improving the results of PLS modelling, called bagging, is evaluated.

The best of the multivariate analysis methods tried are found to be partial least squares (PLS) and support vector machines (SVM). It is concluded that the scaling has little effect on prediction performance for most methods. The method combinations have interesting properties: the default variable selections of the multivariate methods are not always the best. Bagging improves performance, but at a high cost. No reasons are found for drastically changing the workflows of biomarker data analysis, but slight improvements are possible. Further research is needed.
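Bagging, the technique the paper evaluates for improving PLS models, has a simple generic form: fit the base learner on bootstrap resamples of the training set and combine predictions by majority vote. The sketch below uses a nearest-centroid stand-in for the base learner (the paper bags PLS models), so it only illustrates the resample-and-vote mechanics.

```python
import random

def bag_predict(train, test_point, base_fit, rounds=25, seed=0):
    """Bagging sketch: fit the base learner on bootstrap resamples of
    `train` and combine the predictions by majority vote."""
    rng = random.Random(seed)
    votes = []
    for _ in range(rounds):
        sample = [train[rng.randrange(len(train))] for _ in train]
        model = base_fit(sample)
        votes.append(model(test_point))
    return max(set(votes), key=votes.count)

def fit_nearest_centroid(sample):
    """Stand-in base learner on (x, label) pairs with scalar x."""
    groups = {}
    for x, y in sample:
        groups.setdefault(y, []).append(x)
    cents = {y: sum(xs) / len(xs) for y, xs in groups.items()}
    return lambda x: min(cents, key=lambda y: abs(x - cents[y]))
```

The vote smooths out the variance of individual bootstrap fits, which is the effect the paper measures, at the cost of `rounds` times the training work.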
208

Granular Support Vector Machines Based on Granular Computing, Soft Computing and Statistical Learning

Tang, Yuchun 26 May 2006 (has links)
With the emergence of biomedical informatics, Web intelligence, and e-business, new challenges are arising for knowledge discovery and data mining. In this dissertation, a framework named Granular Support Vector Machines (GSVM) is proposed to systematically and formally combine statistical learning theory, granular computing theory, and soft computing theory to address challenging predictive data modeling problems effectively and/or efficiently, with a specific focus on binary classification problems. In general, GSVM works in three steps. Step 1 is granulation, which builds a sequence of information granules from the original dataset or from the original feature space. Step 2 is modeling Support Vector Machines (SVM) in some of these information granules, when necessary. Finally, step 3 is aggregation, which consolidates the information in these granules at a suitable level of abstraction. A good granulation method for finding suitable granules is crucial for modeling a good GSVM. Under this framework, many different granulation algorithms, including the GSVM-CMW (cumulative margin width) algorithm, the GSVM-AR (association rule mining) algorithm, a family of GSVM-RFE (recursive feature elimination) algorithms, the GSVM-DC (data cleaning) algorithm, and the GSVM-RU (repetitive undersampling) algorithm, are designed for binary classification problems with different characteristics. Empirical studies in the biomedical domain and many other application domains demonstrate that the framework is promising. This dissertation work is a preliminary step and will be extended in the future to build a Granular Computing based Predictive Data Modeling framework (GrC-PDM), with which hybrid adaptive intelligent data mining systems for high-quality prediction can be created.
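The three-step skeleton above (granulate, model per granule, aggregate) can be shown in miniature. Here granulation is a single-feature split and the per-granule "model" is a majority label, a deliberately trivial stand-in for the SVMs of step 2; everything about the splitting rule is an assumption for illustration.

```python
def gsvm_predict(train, x, split_feature=0, boundary=0.0):
    """GSVM skeleton: granulate on one feature (step 1), fit a
    per-granule majority model (step 2, stand-in for an SVM), and
    aggregate by routing each query to its granule (step 3)."""
    granules = {False: [], True: []}
    for point, label in train:
        granules[point[split_feature] > boundary].append(label)
    g = granules[x[split_feature] > boundary]
    if not g:                        # empty granule: fall back to all data
        g = [label for _, label in train]
    return max(set(g), key=g.count)
```

The granulation algorithms the dissertation designs (GSVM-AR, GSVM-RFE, and so on) are far richer ways of producing the `granules` partition; the routing-and-aggregation shape stays the same.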
209

SVM-Based Negative Data Mining to Binary Classification

Jiang, Fuhua 03 August 2006 (has links)
The properties of a training data set, such as its size, distribution, and number of attributes, significantly contribute to the generalization error of a learning machine. A poorly distributed data set is prone to produce a partially overfitted model. Two approaches proposed in this dissertation for binary classification enhance useful data information by mining negative data. First, an error-driven compensating hypothesis approach is based on Support Vector Machines (SVMs) with (1+k)-iteration learning, in which the base learning hypothesis is iteratively compensated k times. This approach produces a new hypothesis on the new data set, in which each label is a transformation of the label from the negative data set, further producing positive and negative child data subsets in subsequent iterations. The procedure refines the base hypothesis with the k child hypotheses created in the k iterations. A prediction method is also proposed that traces the relationship between the negative subsets and the testing data set with a vector-similarity technique. Second, a statistical negative-example learning approach based on theoretical analysis improves the performance of the base learning algorithm, the learner, by creating one or two additional hypotheses, the audit and the booster, to mine the negative examples output by the learner. The learner employs a regular Support Vector Machine to classify the main examples and recognize which examples are negative. The audit works on the negative training data created by the learner to predict whether an instance is negative. The boosting learner, the booster, is applied when the audit is not accurate enough to judge the learner correctly; it works on the training data subsets on which the learner and the audit disagree. The classifier used for testing is the combination of learner, audit, and booster: for a specific instance, it returns the learner's result if the audit acknowledges it or the learner agrees with the audit's judgment, and otherwise returns the booster's result. The error of the classifier decreases to O(e^2), compared to the error O(e) of the base learning algorithm.
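The final combination rule stated in the abstract is mechanical enough to write down directly. The sketch below assumes the three hypotheses are given as plain prediction functions and reduces "the audit acknowledges the learner's result" to agreement between the two; the real dissertation's hypotheses are trained SVMs, not these stand-ins.

```python
def classify(x, learner, audit, booster):
    """Combination rule from the abstract: trust the learner when the
    audit confirms its result, otherwise defer to the booster."""
    y = learner(x)
    if audit(x) == y:
        return y
    return booster(x)
```

Only the instances on which learner and audit disagree ever reach the booster, which is why the booster can afford to be trained on just those disputed subsets.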
210

Design of Comprehensible Learning Machine Systems for Protein Structure Prediction

Hu, Hae-Jin 06 August 2007 (has links)
With the efforts to understand protein structure, many computational approaches have been made recently. Among them, Support Vector Machine (SVM) methods have been applied and have shown successful performance compared with other machine learning schemes. Despite the high performance, however, SVM approaches suffer from a problem of understandability, since the SVM is a black-box model: its predictions cannot be interpreted in a biologically meaningful way. To overcome this limitation, a new association-rule-based classifier, PCPAR, was devised on the basis of the existing classifier CPAR to handle sequential data. The performance of PCPAR was further improved by designing two hybrid schemes: the PCPAR/SVM method, a parallel combination of the PCPAR and the SVM, and the PCPAR_SVM method, a sequential combination of the two. To understand the SVM's predictions, the SVM_PCPAR scheme was developed. The experimental results show that the PCPAR scheme performs better than the CPAR method with respect to accuracy and the number of generated patterns. The PCPAR/SVM scheme performs better than the PCPAR, PCPAR_SVM, or SVM_PCPAR schemes, and almost equals the SVM. The generated patterns are easily understandable and biologically meaningful. A system-robustness evaluation and an ROC curve analysis showed that the new scheme is robust and competent.
