261 |
Máquinas de Vetores Suporte e a Análise de Gestos: incorporando aspectos temporais / Support Vector Machines and Gesture Analysis: incorporating temporal aspects. Renata Cristina Barros Madeo. 15 May 2013.
Recentemente, tem-se percebido um interesse maior da área de computação pela pesquisa em análise de gestos. Parte dessas pesquisas visa dar suporte aos pesquisadores da área de "estudos dos gestos", que estuda o uso de partes do corpo para fins comunicativos. Pesquisadores dessa área analisam gestos a partir de transcrições de conversas ou discursos gravados em vídeo. Para a transcrição dos gestos, geralmente realiza-se a sua segmentação em unidades gestuais e fases. O presente trabalho tem por objetivo desenvolver estratégias para a segmentação automatizada das unidades gestuais e das fases dos gestos contidos em um vídeo no contexto de contação de histórias, formulando o problema como uma tarefa de classificação supervisionada. As Máquinas de Vetores Suporte foram escolhidas como método de classificação devido à sua capacidade de generalização e aos bons resultados obtidos em diversos problemas complexos. Máquinas de Vetores Suporte, porém, não consideram os aspectos temporais dos dados, características que são importantes na análise dos gestos. Por esse motivo, este trabalho investiga métodos de representação temporal e variações das Máquinas de Vetores Suporte que consideram raciocínio temporal. Vários experimentos foram executados neste contexto para a segmentação de unidades gestuais. Os melhores resultados foram obtidos com Máquinas de Vetores Suporte tradicionais aplicadas a dados janelados. Além disso, três estratégias de classificação multiclasse foram aplicadas ao problema de segmentação das fases dos gestos. Os resultados indicam que um bom desempenho para a segmentação de gestos pode ser obtido ao realizar o treinamento da estratégia com um trecho inicial do vídeo para obter uma segmentação automatizada do restante do vídeo. Assim, os pesquisadores da área de "estudos dos gestos" poderiam segmentar manualmente apenas um trecho do vídeo, reduzindo o tempo necessário para realizar a análise dos gestos presentes em gravações longas. / Recently, an increasing interest from computer science in gesture analysis research has been noted. Part of this research aims at supporting researchers from "gesture studies", the field that studies the use of body parts for communicative purposes. Researchers in "gesture studies" analyze gestures from transcriptions of conversations and discourses recorded on video. For gesture transcription, the gestures are usually segmented into gesture units and gesture phases. This study aims to develop strategies for the automated segmentation of gesture units and gesture phases in a video, in the context of storytelling, formulating the problem as a supervised classification task. Support Vector Machines were selected as the classification method because of their generalization ability and the good results they have achieved on many complex problems. Support Vector Machines, however, do not consider the temporal aspects of the data, which are important characteristics for gesture analysis. Therefore, this work investigates temporal representation methods and variations of Support Vector Machines that incorporate temporal reasoning. Several experiments were performed in this context for gesture unit segmentation. The best results were obtained with traditional Support Vector Machines applied to windowed data. In addition, three multiclass classification strategies were applied to the problem of gesture phase segmentation. The results indicate that good gesture segmentation performance can be obtained by training the strategy on an initial part of the video in order to obtain an automated segmentation of the rest of the video. Thus, researchers in "gesture studies" could manually segment only part of the video, reducing the time needed to analyze the gestures contained in long recordings.
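For illustration only (not the dissertation's actual implementation), the sketch below shows how frame-level features can be windowed and fed to a standard SVM, with the classifier trained on an initial stretch of the video and used to label the remainder; the data shapes, window width and SVM parameters are assumptions.

```python
# Hedged sketch: gesture phase labelling with an SVM applied to windowed frame features.
import numpy as np
from sklearn.svm import SVC

def window_features(frames, width=7):
    """Stack each frame with its neighbours so the classifier sees local temporal context."""
    pad = width // 2
    padded = np.vstack([frames[:1]] * pad + [frames] + [frames[-1:]] * pad)
    return np.array([padded[i:i + width].ravel() for i in range(len(frames))])

frames = np.random.rand(1000, 6)            # placeholder per-frame hand/wrist features
labels = np.random.randint(0, 5, 1000)      # placeholder gesture-phase label per frame

X = window_features(frames)
split = 300                                  # manually segmented initial stretch of the video
clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X[:split], labels[:split])
predicted_phases = clf.predict(X[split:])    # automated segmentation of the remainder
```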
|
262 |
Mapeamento de ambientes externos utilizando robôs móveis / Outdoor mapping using mobile robots. Hata, Alberto Yukinobu. 24 May 2010.
A robótica móvel autônoma é uma área relativamente recente que tem como objetivo a construção de mecanismos capazes de executar tarefas sem a necessidade de um controlador humano. De uma forma geral, a robótica móvel se defronta com três problemas fundamentais: mapeamento de ambientes, localização e navegação do robô. Sem esses elementos, o robô dificilmente poderia se deslocar autonomamente de um lugar para outro. Um dos problemas existentes nessa área é a atuação de robôs móveis em ambientes externos como parques e regiões urbanas, onde a complexidade do cenário é muito maior em comparação aos ambientes internos como escritórios e casas. Para exemplificar, nos ambientes externos os sensores estão sujeitos às condições climáticas (iluminação do sol, chuva e neve). Além disso, os algoritmos de navegação dos robôs nestes ambientes devem tratar uma quantidade bem maior de obstáculos (pessoas, animais e vegetação). Esta dissertação apresenta o desenvolvimento de um sistema de classificação da navegabilidade de terrenos irregulares, como por exemplo ruas e calçadas. O mapeamento do cenário é realizado através de uma plataforma robótica equipada com um sensor laser direcionado para o solo. Foram desenvolvidos dois algoritmos para o mapeamento de terrenos: um para a visualização dos detalhes finos do ambiente, gerando um mapa de nuvem de pontos, e outro para a visualização das regiões próprias e impróprias para o tráfego do robô, resultando em um mapa de navegabilidade. No mapa de navegabilidade, são utilizados métodos de aprendizado de máquina supervisionado para classificar o terreno em navegável (regiões planas), parcialmente navegável (grama, cascalho) ou não navegável (obstáculos). Os métodos empregados foram redes neurais artificiais e máquinas de suporte vetorial. Os resultados de classificação obtidos por ambos foram posteriormente comparados para determinar a técnica mais apropriada para desempenhar esta tarefa. / Autonomous mobile robotics is a relatively recent research area that focuses on the construction of mechanisms capable of executing tasks without human control. In general, mobile robotics deals with three fundamental problems: environment mapping, robot localization and navigation. Without these elements, the robot could hardly move autonomously from one place to another. One of the open problems in this area is the operation of mobile robots in outdoor environments (e.g. parks and urban areas), which are considerably more complex than indoor environments (e.g. offices and houses). For example, outdoors the sensors are subject to weather conditions (sunlight, rain and snow), and the navigation algorithms must handle a much larger number of obstacles (people, animals and vegetation). This dissertation presents the development of a system that classifies the navigability of irregular terrain, such as streets and sidewalks. The scenario mapping is done using a robotic platform equipped with a laser range finder pointed at the ground. Two terrain mapping algorithms have been developed: one for visualizing the fine details of the environment, generating a point cloud map, and another for visualizing regions that are appropriate or inappropriate for robot navigation, resulting in a navigability map. In the navigability map, supervised machine learning methods are used to classify terrain portions as navigable (flat regions), partially navigable (grass, gravel) or non-navigable (obstacles). The classification methods employed were artificial neural networks and support vector machines. The classification results obtained by both were then compared to determine the most appropriate technique for this task.
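As a rough illustration of the comparison described above, the snippet below cross-validates an SVM and a small neural network on hypothetical per-cell laser features; the feature set, class encoding and hyperparameters are assumptions, not the dissertation's setup.

```python
# Hedged sketch: terrain navigability classification, SVM vs. neural network.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(500, 3)                 # placeholder laser features per terrain cell
y = np.random.randint(0, 3, 500)           # 0 navigable, 1 partially navigable, 2 obstacle

svm = SVC(kernel="rbf", C=1.0, gamma="scale")
mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000)

print("SVM accuracy:", cross_val_score(svm, X, y, cv=5).mean())
print("MLP accuracy:", cross_val_score(mlp, X, y, cv=5).mean())
```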
|
263 |
Contribution des moyens de production dispersés aux courants de défaut. Modélisation des moyens de production et algorithmes de détection de défaut. / Fault current contribution from Distributed Generators (DGs). Modelling of DGs and fault detection algorithms. Le, Trung Dung. 28 February 2014.
Les travaux de la thèse se focalisent sur la protection des réseaux de distribution HTA en présence des générateurs distribués (éoliennes, fermes solaires, etc.). Dans un premier temps, un état de l'art a été réalisé sur les comportements des générateurs en creux de tension, leurs impacts sur le système de protection et les pistes de solution proposées pour y remédier. L'étape suivante est la mise au point d'algorithmes directionnels de détection de défauts, sans mesure de tension. Ces algorithmes s'appuient sur la décomposition en composantes symétriques des courants mesurés. Ces relais doivent empêcher le déclenchement intempestif de protections de surintensité dû au courant de défaut provenant des générateurs distribués. Ils sont moins coûteux par rapport à ceux traditionnels car les capteurs de tension, qui sont indispensables pour ces derniers, peuvent être enlevés. Après détection d'un défaut sur critère de seuil simple (max de I ou max de I résiduel), la direction est évaluée à l'aide d'un algorithme en delta basé sur les rapports courants inverse-homopolaire ou inverse-direct, selon le type de défaut (monophasé ou biphasé). En se basant sur ces rapports, un classifieur SVM (Support Vector Machines), entraîné préalablement à partir des simulations, donne ensuite l'estimation de la direction du défaut (amont ou aval par rapport au relais). La bonne performance de ces algorithmes a été montrée dans la thèse pour différents paramètres du réseau et en présence de différents types de générateurs. Le développement de tels algorithmes favorise la mise en œuvre des protections en réseau, qui pourraient être installées dans les futurs Smart Grids. / This research focuses on the protection of MV distribution networks with Distributed Generators (DGs), such as wind farms and photovoltaic farms. First, a state of the art is presented on the fault behaviour of DGs, their impacts on the protection system and the possible mitigation solutions. Next, algorithms are developed for directional relays that work without voltage sensors. Based on the symmetrical-component method, these algorithms help overcurrent protections avoid false tripping caused by the fault current contribution of DGs. Since the voltage sensors can be removed, such directional relays become cheaper than traditional ones. Following fault detection (the phase or residual current reaches the pick-up value) and depending on the fault type (line-to-ground or line-to-line), the ratios between the variations (before and during the fault) of the negative- and zero-sequence or negative- and positive-sequence currents are calculated. From these ratios, an SVM (Support Vector Machines) classifier, trained beforehand on transient simulations, estimates the fault direction (upstream or downstream of the detector). The thesis shows good performance of the directional algorithms for different network parameters and different kinds of DGs. Such algorithms could be implemented in protections along the feeders in future smart grids.
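The sketch below illustrates the general idea of the directional criterion under assumptions about the feature encoding: the variation of the sequence currents between pre-fault and fault conditions is converted into negative/zero- and negative/positive-sequence ratios, and an SVM trained on simulated faults labels the direction. It is a sketch, not the thesis algorithm itself.

```python
# Hedged sketch: voltage-free directional criterion from sequence-current ratios plus an SVM.
import numpy as np
from sklearn.svm import SVC

a = np.exp(2j * np.pi / 3)
F = np.array([[1, 1, 1], [1, a, a**2], [1, a**2, a]]) / 3   # Fortescue transform

def sequence_currents(i_abc):
    """Zero, positive and negative sequence phasors from the three phase currents."""
    return F @ np.asarray(i_abc, dtype=complex)

def direction_features(i_pre, i_fault):
    d0, d1, d2 = sequence_currents(i_fault) - sequence_currents(i_pre)   # delta quantities
    r20 = d2 / d0 if abs(d0) > 1e-9 else 0.0   # negative/zero ratio (single line-to-ground)
    r21 = d2 / d1 if abs(d1) > 1e-9 else 0.0   # negative/positive ratio (line-to-line)
    return [abs(r20), np.angle(r20), abs(r21), np.angle(r21)]

# Training pairs would come from transient simulations of upstream/downstream faults.
X = np.random.rand(200, 4)                 # placeholder ratio features
y = np.random.randint(0, 2, 200)           # 0 = upstream, 1 = downstream of the relay
direction_classifier = SVC(kernel="rbf").fit(X, y)
```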
|
264 |
W-operator learning using linear models for both gray-level and binary inputs / Aprendizado de w-operadores usando modelos lineares para imagens binárias e em níveis de cinza. Montagner, Igor dos Santos. 12 June 2017.
Image Processing techniques can be used to solve a broad range of problems, such as medical imaging, document processing and object segmentation. Image operators are usually built by combining basic image operators and tuning their parameters. This requires both experience in Image Processing and trial-and-error to find the best combination of parameters. An alternative approach to designing image operators is to estimate them from pairs of training images containing examples of the expected input and their processed versions. By restricting the learned operators to those that are translation invariant and locally defined ($W$-operators), we can apply Machine Learning techniques to estimate image transformations. The shape that defines which neighbors are used is called a window. $W$-operators trained with large windows usually overfit due to the lack of sufficient training data. This issue is even more pronounced when training operators with gray-level inputs. Although approaches such as the two-level design, which combines multiple operators trained on smaller windows, partly mitigate these problems, they also require more complicated parameter determination to achieve good results. In this work we present techniques that increase the window sizes we can use and decrease the number of manually defined parameters in $W$-operator learning. The first one, KA, is based on Support Vector Machines and employs kernel approximations to estimate image transformations. We also present adequate kernels for processing binary and gray-level images. The second technique, NILC, automatically finds small subsets of operators that can be successfully combined using the two-level approach. Both methods achieve results competitive with methods from the literature in two different application domains. The first is a binary document processing problem common in Optical Music Recognition, while the second is a segmentation problem in gray-level images. The same techniques were applied without modification in both domains. / Processamento de imagens pode ser usado para resolver problemas em diversas áreas, como imagens médicas, processamento de documentos e segmentação de objetos. Operadores de imagens normalmente são construídos combinando diversos operadores elementares e ajustando seus parâmetros. Uma abordagem alternativa é a estimação de operadores de imagens a partir de pares de exemplos contendo uma imagem de entrada e o resultado esperado. Restringindo os operadores considerados para os que são invariantes à translação e localmente definidos ($W$-operadores), podemos aplicar técnicas de Aprendizagem de Máquina para estimá-los. O formato que define quais vizinhos são usados é chamado de janela. $W$-operadores treinados com janelas grandes frequentemente têm problemas de generalização, pois necessitam de grandes conjuntos de treinamento. Este problema é ainda mais grave ao treinar operadores em níveis de cinza. Apesar de técnicas como o projeto em dois níveis, que combina a saída de diversos operadores treinados com janelas menores, mitigarem em parte estes problemas, uma determinação de parâmetros complexa é necessária. Neste trabalho apresentamos duas técnicas que permitem o treinamento de operadores usando janelas grandes. A primeira, KA, é baseada em Máquinas de Suporte Vetorial (SVM) e utiliza técnicas de aproximação de kernels para realizar o treinamento de $W$-operadores. Uma escolha adequada de kernels permite o treinamento de operadores em níveis de cinza e binários. A segunda técnica, NILC, permite a criação automática de combinações de operadores de imagens. Este método utiliza uma técnica de otimização específica para casos em que o número de características é muito grande. Ambos os métodos obtiveram resultados competitivos com algoritmos da literatura em dois domínios de aplicação diferentes. O primeiro, staff removal, é um problema de processamento de documentos binários frequente em sistemas de reconhecimento ótico de partituras. O segundo é um problema de segmentação de vasos sanguíneos em imagens em níveis de cinza.
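A minimal sketch of the kernel-approximation (KA) idea is given below, assuming a binary training image pair, a 5x5 window, and scikit-learn's Nystroem approximation followed by a linear SVM; the window size, kernel and component count are illustrative, not the values used in the thesis.

```python
# Hedged sketch: learning a W-operator with a kernel approximation and a linear SVM.
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def extract_windows(img, half=2):
    """Collect the (2*half+1)^2 neighbourhood of every interior pixel as one feature vector."""
    h, w = img.shape
    rows = []
    for i in range(half, h - half):
        for j in range(half, w - half):
            rows.append(img[i - half:i + half + 1, j - half:j + half + 1].ravel())
    return np.array(rows)

x_img = (np.random.rand(64, 64) > 0.5).astype(np.uint8)   # placeholder observed image
y_img = (np.random.rand(64, 64) > 0.5).astype(np.uint8)   # placeholder ideal output image

X = extract_windows(x_img, half=2)
y = y_img[2:-2, 2:-2].ravel()              # ideal output at each window's centre pixel

w_operator = make_pipeline(Nystroem(kernel="rbf", n_components=300), LinearSVC())
w_operator.fit(X, y)
```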
|
265 |
Deteção de extra-sístoles ventriculares (Detection of ventricular extrasystoles). Silva, Aurélio Filipe de Sousa e. January 2012.
Integrated master's thesis, Bioengineering, specialization in Biomedical Engineering. Faculdade de Engenharia, Universidade do Porto, 2012.
|
266 |
Novel Support Vector Machines for Diverse Learning Paradigms. Melki, Gabriella A. 01 January 2018.
This dissertation introduces novel support vector machines (SVMs) for the following traditional and non-traditional learning paradigms: online classification, multi-target regression, multiple-instance classification, and data stream classification.
Three multi-target support vector regression (SVR) models are first presented. The first involves building independent, single-target SVR models for each target. The second builds an ensemble of randomly chained models using the first single-target method as a base model. The third calculates the targets' correlations and forms a maximum correlation chain, which is used to build a single chained SVR model, improving the model's prediction performance, while reducing computational complexity.
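As an illustration of the chained multi-target SVR idea (not the dissertation's exact procedure), the sketch below orders targets by how strongly they correlate with the remaining targets and builds a single chain of SVR models with scikit-learn's RegressorChain; the ordering heuristic and all parameters are assumptions.

```python
# Hedged sketch: a correlation-ordered chain of SVR models for multi-target regression.
import numpy as np
from sklearn.multioutput import RegressorChain
from sklearn.svm import SVR

X = np.random.rand(300, 8)                 # placeholder features
Y = np.random.rand(300, 4)                 # placeholder multi-target outputs

corr = np.abs(np.corrcoef(Y, rowvar=False))
np.fill_diagonal(corr, 0.0)
order = list(np.argsort(-corr.sum(axis=0)))   # most strongly correlated targets first

chained_svr = RegressorChain(SVR(kernel="rbf", C=1.0), order=order)
chained_svr.fit(X, Y)                      # each SVR also sees the earlier targets' predictions
predictions = chained_svr.predict(X)
```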
Under the multi-instance paradigm, a novel SVM multiple-instance formulation and an algorithm with a bag-representative selector, named Multi-Instance Representative SVM (MIRSVM), are presented. The contribution trains the SVM on bag-level information and is able to identify the instances that most strongly impact classification, i.e. bag-representatives, for both positive and negative bags, while finding the optimal class-separating hyperplane. Unlike other multi-instance SVM methods, this approach avoids possible class imbalance issues by allowing both positive and negative bags to have at most one representative, these representatives being the instances that contribute most to the model.
Due to the shortcomings of current popular SVM solvers, especially in the context of large-scale learning, the third contribution presents a novel stochastic, i.e. online, learning algorithm for solving the L1-SVM problem in the primal domain, dubbed OnLine Learning Algorithm using Worst-Violators (OLLAWV). Unlike other stochastic methods, this algorithm provides a novel stopping criterion and eliminates the need for a regularization term, relying on early stopping instead. Because of these characteristics, OLLAWV was shown to efficiently produce sparse models while maintaining competitive accuracy.
OLLAWV's online nature and its success on traditional classification inspired its implementation, together with that of its predecessor, the OnLine Learning Algorithm - List 2 (OLLA-L2), in the batch data stream classification setting. Unlike other existing methods, these two algorithms were chosen because their properties are a natural remedy for the time and memory constraints that arise in the data stream problem. OLLA-L2's low spatial complexity addresses the memory constraints imposed by the data stream setting, while OLLAWV's fast run time, early self-stopping capability and ability to produce sparse models address both memory and time constraints. Preliminary results showed OLLAWV to perform better than its predecessor, so it was chosen for the final set of experiments against current popular data stream methods.
Rigorous experimental studies and statistical analyses over various metrics and datasets were conducted in order to comprehensively compare the proposed solutions against modern, widely used methods from all of these paradigms. The experimental studies and analyses confirm that the proposals achieve better performance and more scalable solutions than the compared methods, making them competitive in their respective fields.
|
267 |
MACHINE LEARNING FOR MECHANICAL ANALYSIS. Bengtsson, Sebastian. January 2019.
It is not reliable to depend on a person's inference on dense, high-dimensional data on a daily basis. A person will grow tired or become distracted and make mistakes over time. It is therefore desirable to study the feasibility of replacing a person's inference with that of Machine Learning in order to improve reliability. One-Class Support Vector Machines (SVMs) with three different kernels (linear, Gaussian and polynomial) are implemented and tested for anomaly detection. Principal Component Analysis is used for dimensionality reduction, and autoencoders are used with the intention of increasing performance. Standard soft-margin SVMs were used for multi-class classification by utilizing the 1-vs-All and 1-vs-1 approaches with the same kernels as for the one-class SVMs. The results for the one-class SVMs and the multi-class SVM methods are compared against each other within their respective applications, but also against the performance of Back-Propagation Neural Networks of varying sizes. One-Class SVMs proved very effective in detecting anomalous samples once both Principal Component Analysis and autoencoders had been applied. Standard SVMs with Principal Component Analysis produced promising classification results. Twin SVMs were researched as an alternative to standard SVMs.
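A minimal sketch of the anomaly-detection pipeline described above is shown below, assuming PCA feeds a one-class SVM trained only on normal samples; the dimensions, nu and gamma values are illustrative, and the autoencoder stage is omitted.

```python
# Hedged sketch: PCA followed by a one-class SVM for anomaly detection.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import OneClassSVM

normal_data = np.random.rand(500, 40)      # placeholder high-dimensional normal samples
new_samples = np.random.rand(20, 40)       # placeholder samples to screen

detector = make_pipeline(PCA(n_components=10),
                         OneClassSVM(kernel="rbf", nu=0.05, gamma="scale"))
detector.fit(normal_data)                  # trained on normal operation only
flags = detector.predict(new_samples)      # +1 = normal, -1 = anomaly
```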
|
268 |
Détection de mots clés dans un flux de parole (Keyword spotting in a speech stream). Ben Ayed, Yassine. 23 December 2003.
Automatic speech recognition is currently attracting great interest. In particular, keyword spotting is an important branch of human-machine interaction, given the need to communicate with our machines in a natural and direct way using spontaneous speech. This technique consists of detecting, in a spoken utterance, the keywords that characterize the application, and of rejecting out-of-vocabulary words as well as hesitations, false starts, etc. The work presented in this manuscript addresses keyword spotting in a speech stream. First, we propose new "garbage" models based on the modelling of out-of-vocabulary words. We then introduce phone-loop-based recognition, in which we apply different reward functions that favour the recognition of keywords. Next, we propose the use of confidence measures in order to decide whether to accept or reject a hypothesized keyword. The proposed confidence measures are based on the local acoustic observation probability. First, we use the arithmetic, geometric and harmonic means as confidence measures for each keyword. Second, we propose to compute the confidence measure based on the phone-loop method. Finally, we cast the detection problem as a classification problem in which each keyword can belong to one of two classes, namely "correct" and "incorrect". This classification is performed using Support Vector Machines (SVM), a recent statistical learning technique. Each recognized keyword is represented by a feature vector that forms the input of the SVM classifier. To build this vector we use the local acoustic observation probability and then introduce the duration of each state. To improve performance, we propose hybrid approaches combining garbage models with confidence measures, and confidence measures with SVM. The performance of all these models is evaluated on the French SPEECHDAT database, using ROC curves and recall/precision curves. The best results were obtained by the SVM-based methods; the hybrid methods also achieved good performance.
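To make the first family of confidence measures concrete, the sketch below computes the arithmetic, geometric and harmonic means of frame-level acoustic observation probabilities for a hypothesized keyword and trains an accept/reject SVM on such features; the data, labels and parameters are placeholders, not the thesis setup.

```python
# Hedged sketch: mean-based confidence measures feeding an accept/reject SVM.
import numpy as np
from sklearn.svm import SVC

def confidence_features(frame_probs):
    """Arithmetic, geometric and harmonic means of per-frame observation probabilities."""
    p = np.clip(np.asarray(frame_probs, dtype=float), 1e-12, 1.0)
    arithmetic = p.mean()
    geometric = np.exp(np.log(p).mean())
    harmonic = len(p) / np.sum(1.0 / p)
    return [arithmetic, geometric, harmonic]

# One feature vector per hypothesized keyword; label 1 = correct detection, 0 = false alarm.
X = np.array([confidence_features(np.random.rand(50)) for _ in range(200)])
y = np.random.randint(0, 2, 200)
verifier = SVC(kernel="rbf", C=1.0).fit(X, y)
```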
|
269 |
Comparing Support Vector Machines with Gaussian Kernels to Radial Basis Function Classifiers. Schoelkopf, B., Sung, K., Burges, C., Girosi, F., Niyogi, P., Poggio, T., Vapnik, V. 01 December 1996.
The Support Vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights and threshold so as to minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by $k$-means clustering and the weights are found using error backpropagation. We consider three machines, namely a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the US Postal Service database of handwritten digits, the SV machine achieves the highest test accuracy, followed by the hybrid approach. The SV approach is thus not only theoretically well-founded, but also superior in a practical application.
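The snippet below sketches the two systems being compared, under simplifying assumptions: an SVM with Gaussian kernel, whose centres (support vectors) and weights are chosen by the SV algorithm, against a classical RBF network whose centres come from k-means. A linear output layer trained by logistic regression stands in for the error-backpropagation training of the original study, and all data and parameters are placeholders.

```python
# Hedged sketch: Gaussian-kernel SVM vs. a classical k-means-centred RBF network.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

X = np.random.rand(400, 16)                # placeholder digit features
y = np.random.randint(0, 10, 400)          # placeholder digit labels

# SVM: centres and weights determined automatically by the SV algorithm.
svm = SVC(kernel="rbf", gamma=0.5, C=10).fit(X, y)

# Classical RBF network: centres fixed by k-means, then a trained output layer.
centers = KMeans(n_clusters=50, n_init=10).fit(X).cluster_centers_
Phi = rbf_kernel(X, centers, gamma=0.5)    # hidden-layer activations
rbf_net = LogisticRegression(max_iter=1000).fit(Phi, y)
```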
|
270 |
Multitemporal Spaceborne Polarimetric SAR Data for Urban Land Cover Mapping. Niu, Xin. January 2012.
Urban land cover mapping represents one of the most important remote sensing applications in the context of rapid global urbanization. In recent years, high resolution spaceborne Polarimetric Synthetic Aperture Radar (PolSAR) has been increasingly used for urban land cover/land-use mapping, since more information can be obtained in multiple polarizations and the collection of such data is less influenced by solar illumination and weather conditions. The overall objective of this research is to develop effective methods to extract accurate and detailed urban land cover information from spaceborne PolSAR data. Six RADARSAT-2 fine-beam polarimetric SAR and three RADARSAT-2 ultra-fine beam SAR images were used. These data were acquired from June to September 2008 over the northern urban-rural fringe of the Greater Toronto Area, Canada. The major land-use/land-cover classes in this area include high-density residential areas, low-density residential areas, industrial and commercial areas, construction sites, roads, streets, parks, golf courses, forests, pasture, water and two types of agricultural crops. In this research, various polarimetric SAR parameters were evaluated for urban land cover mapping. They include the parameters from the Pauli, Freeman and Cloude-Pottier decompositions, the coherency matrix, the intensities of each polarization and their logarithms. Both object-based and pixel-based classification approaches were investigated. Through an object-based Support Vector Machine (SVM) and a rule-based approach, the efficiency of various PolSAR features and multitemporal data combinations was evaluated. For the pixel-based approach, a contextual Stochastic Expectation-Maximization (SEM) algorithm was proposed. With an adaptive Markov Random Field (MRF) and a modified Multiscale Pappas Adaptive Clustering (MPAC), contextual information was exploited to improve the mapping results. To take full advantage of alternative PolSAR distribution models, a rule-based model selection approach was put forward and compared with a dictionary-based approach. Moreover, the capability of multitemporal fine-beam PolSAR data was compared with that of multitemporal ultra-fine beam C-HH SAR data. Texture analysis and a rule-based approach that explores the object features and their spatial relationships were applied for further improvement. Using the proposed approaches, detailed urban land-cover classes and finer urban structures could be mapped with high accuracy, in contrast to most previous studies, which have only focused on the extraction of urban extent or the mapping of very few urban classes. This is also one of the first comparisons of various PolSAR parameters for detailed urban mapping using an object-based approach. Unlike other multitemporal studies, the multitemporal analysis here focused on the significance of complementary information from both ascending and descending SAR data and on the temporal relationships in the data. Further, the proposed contextual analyses could effectively improve pixel-based classification accuracy and produce homogeneous results with preserved shape details, avoiding over-averaging. The proposed contextual SEM algorithm, one of the first to combine an adaptive MRF with the modified MPAC, was able to mitigate the degeneracy problem of traditional EM algorithms with fast convergence when dealing with many classes. This contextual SEM outperformed the contextual SVM in certain situations with regard to both accuracy and computation time.
Using such a contextual algorithm, the common PolSAR data distribution models, namely Wishart, G0p, Kp and KummerU, were compared for detailed urban mapping in terms of both mapping accuracy and time efficiency. In the comparisons, G0p, Kp and KummerU demonstrated better performance, with higher overall accuracies, than Wishart. Nevertheless, the advantages of Wishart and the other models could also be effectively integrated by the proposed rule-based adaptive model selection, while only limited improvement was observed with the dictionary-based selection that has been applied in previous studies. The use of polarimetric SAR data for identifying various urban classes was then compared with the use of ultra-fine-beam C-HH SAR data. The grey level co-occurrence matrix textures generated from the ultra-fine-beam C-HH SAR data were found to be more efficient than the corresponding PolSAR textures for separating urban from rural areas. An object-based and pixel-based fusion approach that combines ultra-fine-beam C-HH SAR texture data with PolSAR data was developed. In contrast to many other fusion approaches that have exploited pixel-based classification results to improve object-based classifications, the proposed rule-based fusion approach, using object features and contextual information, was able to extract several low-backscatter classes such as roads, streets and parks with reasonable accuracy.
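As a small, hedged illustration of one ingredient above, the sketch below derives grey level co-occurrence matrix (GLCM) textures from a single-channel SAR intensity patch with scikit-image and combines them with polarimetric features in an SVM; the feature choices, quantization and class count are assumptions, not the thesis configuration.

```python
# Hedged sketch: GLCM texture features plus polarimetric features classified by an SVM.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_texture(patch, levels=32):
    """Contrast and homogeneity of a quantized SAR intensity patch."""
    q = np.clip((patch * (levels - 1)).astype(np.uint8), 0, levels - 1)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, "contrast")[0, 0], graycoprops(glcm, "homogeneity")[0, 0]]

patches = [np.random.rand(32, 32) for _ in range(100)]   # placeholder C-HH intensity patches
pol_feats = np.random.rand(100, 6)                       # placeholder polarimetric features
X = np.hstack([pol_feats, np.array([glcm_texture(p) for p in patches])])
y = np.random.randint(0, 12, 100)                        # placeholder land-cover labels
clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X, y)
```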
|