221

Detection of deceptive reviews : using classification and natural language processing features

Fernquist, Johan January 2016 (has links)
With the great growth of open online forums where anyone can give their opinion on anything, the Internet has become a place where people try to mislead others. Assuming that there is a correlation between a deceptive text's purpose and the way the text is written, our goal in this thesis was to develop a model for detecting such fake texts by exploiting this correlation. Our approach was to use classification together with three different feature types: term frequency-inverse document frequency (tf-idf), word2vec and probabilistic context-free grammar. We developed a model that improves on all results known to us for two different datasets. Using machine translation, we found that it is possible to hide the stylometric footprint and characteristics of deceptive texts, making it possible to slightly decrease a classifier's accuracy while still conveying a message. Finally, we investigated whether it was possible to train and test our model on data from different sources, and achieved an accuracy hardly better than chance, indicating that the resulting model is not versatile enough to be used on kinds of deceptive texts other than those it was trained on.
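A minimal sketch of the first feature type named above, tf-idf vectors feeding a classifier, using scikit-learn. The toy reviews, the labels and the choice of a linear SVM are illustrative assumptions, not the thesis's datasets or exact setup.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

reviews = [
    "Amazing hotel, the staff were wonderful and the room was perfect!",
    "The room was small and the carpet worn, but check-in was quick.",
    "Best stay of my life, absolutely flawless, will return forever!",
    "Breakfast ran out by 9am; location was convenient for the metro.",
]
labels = [1, 0, 1, 0]  # 1 = deceptive, 0 = truthful (toy labels)

# tf-idf over unigrams and bigrams, then a linear SVM on top
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(reviews, labels)
print(model.predict(["The staff were wonderful and breakfast was perfect!"]))

The word2vec and PCFG feature types would slot into the same pipeline as additional feature blocks concatenated with the tf-idf vectors.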
222

Reconhecimento de assinaturas baseado em seus ruídos caligráficos / Recognition of signatures based on their calligraphic noise

João Paulo Lemos Escola 04 February 2014 (has links)
Biometrics is the recognition of living beings based on their physiological or behavioural characteristics. Many biometric methods exist today, and the handwritten signature on paper is one of the oldest behavioural measurement techniques. Through audio signal processing, it is possible to recognise patterns in the noise emitted by a pen while signing. To increase the success rate when validating a person's signature, this work proposes a technique based on an algorithm that combines Support Vector Machines (SVMs), trained with a semi-supervised learning procedure and fed by a set of parameters obtained with the Discrete Wavelet Transform of the audio signal of the noise emitted by the pen when signing on a rigid surface. Tests carried out on a database of real signatures, evaluating several wavelet filters, demonstrate the effectiveness of the proposed technique.
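A rough sketch of the core idea: sub-band energies from a Discrete Wavelet Transform of the pen-noise audio feeding an SVM, using PyWavelets and scikit-learn. The synthetic "recordings", the db4 wavelet and the fully supervised SVC are stand-ins for the thesis's real signature audio and semi-supervised training.

import numpy as np
import pywt
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def dwt_energy_features(signal, wavelet="db4", level=5):
    # Energy of each DWT sub-band is a common compact audio descriptor
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

def fake_recording(signer, n=4096):
    # Two "signers" simulated as differently filtered noise (illustrative
    # stand-in for the scratching sound of a pen on a rigid surface)
    noise = rng.standard_normal(n)
    k = 5 if signer == 0 else 25
    kernel = np.ones(k) / k  # heavier smoothing = lower-pitched scratch
    return np.convolve(noise, kernel, mode="same")

X = np.array([dwt_energy_features(fake_recording(s)) for s in [0, 1] * 20])
y = np.array([0, 1] * 20)
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict([dwt_energy_features(fake_recording(1))]))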
223

Classificadores baseados em vetores de suporte gerados a partir de dados rotulados e não-rotulados. / Learning support vector machines from labeled and unlabeled data.

Clayton Silva Oliveira 30 March 2006 (has links)
Semi-supervised learning is a machine learning methodology that combines features of supervised and unsupervised learning. It allows partially labelled databases (containing both labelled and unlabelled data) to be used for training classifiers. Adding unlabelled data, which is cheaper and generally more plentiful than labelled data, can improve performance and/or reduce the cost of training such classifiers (by decreasing the amount of labelled data required). This work analyses two strategies for performing semi-supervised learning, specifically with Support Vector Machines (SVMs): the direct and the indirect approach. The direct strategy is currently better known and more widely studied; it uses labelled and unlabelled data simultaneously when learning a classifier. However, including many unlabelled examples can make training exceedingly slow. The indirect strategy is more recent and can deliver the benefits of direct semi-supervised learning with shorter training times. It uses the unlabelled data to pre-process the database before the classifier is learned, allowing, for example, the filtering of noise and the rewriting of the data in more convenient feature spaces. The main contribution of this Master's thesis lies within the indirect strategy: the conception, implementation and analysis of the split learning algorithm. We obtained promising empirical results with this algorithm, which proved efficient at training better-performing SVMs in shorter times from partially labelled databases.
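A small sketch of the indirect strategy described above, under the assumption that the pre-processing step is a PCA fitted on the full (labelled plus unlabelled) pool; the thesis's split learning algorithm itself is not reproduced here.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=30, n_informative=5,
                           random_state=0)
lab = slice(0, 60)      # pretend only 60 examples carry labels
test = slice(500, 600)  # held-out evaluation data

# Indirect strategy: the unlabeled pool shapes the representation (here a
# PCA rewriting of the feature space); the SVM then trains on labels only.
pca = PCA(n_components=8).fit(X[:500])  # fit on labeled + unlabeled data
clf = SVC().fit(pca.transform(X[lab]), y[lab])
print("accuracy:", clf.score(pca.transform(X[test]), y[test]))

Because the SVM only ever sees the 60 labelled points, training stays fast regardless of how large the unlabelled pool grows, which is the speed advantage the abstract attributes to the indirect form.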
224

Análise de wavelets com máquina de vetor de suporte no eletrencefalograma da doença de Alzheimer / Wavelets analysis with support vector machine in Alzheimer's disease EEG

Paulo Afonso Medeiros Kanda 07 March 2013 (has links)
INTRODUCTION: The aim of this study was to determine whether the Morlet wavelet transform and a machine learning (ML) technique, the Support Vector Machine (SVM), are suitable for finding patterns in the EEG that differentiate normal controls from patients with Alzheimer's disease (AD). There is no specific diagnostic test for AD; its diagnosis is based on clinical history, neuropsychological testing, laboratory tests, neuroimaging and electroencephalography. New approaches are therefore needed to allow an earlier and more accurate diagnosis and to measure response to treatment. Quantitative EEG (qEEG) can be used as a diagnostic tool in selected cases. METHODS: Patients came from the outpatient clinic of the Cognitive Neurology and Behavior Group (GNCC) of the Division of Clinical Neurology, HCFMUSP, or were evaluated by the Cognitive Electroencephalography Laboratory of CEREDIC HC-FMUSP. We studied the EEGs of 74 normal subjects (33 women/41 men, mean age 67 years) and 84 patients with mild to moderate probable AD (52 women/32 men, mean age 74.7 years). The wavelet transform and feature selection were carried out with the Letswave software, and the SVM analysis of the features (delta, theta, alpha and beta bands) was performed with the WEKA tool (Waikato Environment for Knowledge Analysis). RESULTS: Classification of controls versus AD patients achieved an accuracy of 90.74% and a ROC area of 0.90; identification of a single proband among all the others achieved an accuracy of 81.01% and a ROC area of 0.80. A quantitative EEG (qEEG) processing method was developed for the automatic differentiation of AD patients from normal subjects. The procedure is intended to complement the diagnosis of probable dementia, particularly in health services where resources are limited.
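A toy sketch of the pipeline: Morlet-wavelet band powers (delta to beta) as features for an SVM, here with PyWavelets and scikit-learn instead of Letswave and WEKA, and with synthetic "EEG" traces in place of patient recordings.

import numpy as np
import pywt
from sklearn.svm import SVC

fs = 128
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg):
    # Continuous Morlet wavelet transform, then mean power per EEG band
    scales = np.arange(2, 128)
    coef, freqs = pywt.cwt(eeg, scales, "morl", sampling_period=1 / fs)
    power = np.abs(coef) ** 2
    return np.array([power[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in bands.values()])

rng = np.random.default_rng(1)
t = np.arange(4 * fs) / fs

def fake_eeg(slowing):
    # "Slowed" EEG (more delta/theta) loosely mimics the AD pattern
    return (slowing * np.sin(2 * np.pi * 3 * t)
            + (1 - slowing) * np.sin(2 * np.pi * 10 * t)
            + 0.5 * rng.standard_normal(t.size))

X = np.array([band_powers(fake_eeg(s)) for s in [0.2, 0.8] * 15])
y = np.array([0, 1] * 15)  # 0 = control, 1 = AD-like (toy labels)
print(SVC().fit(X[:-2], y[:-2]).predict(X[-2:]))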
225

Bridging the capability gap in environmental gamma-ray spectrometry

Varley, A. L. January 2015 (has links)
Environmental gamma-ray spectroscopy provides a powerful tool for environmental monitoring, offering a compromise between measurement time and accuracy that allows large areas to be surveyed quickly and relatively inexpensively. Depending on monitoring objectives, spectral information can then be analysed in real time or post-survey to characterise contamination and identify potential anomalies. Smaller-volume detectors are of particular worth to environmental surveys as they can be operated in the most demanding environments. However, difficulties are encountered in selecting a detector that is robust enough for environmental surveying yet still provides a high-quality signal. Furthermore, shortcomings remain in the methods employed for robust spectral processing, since a number of complexities need to be overcome, including the non-linearity of detector response with source burial depth, large counting uncertainties, heterogeneity in the natural background, and unreliable methods for detector calibration. This thesis aimed to investigate the application of machine learning algorithms to environmental gamma-ray spectroscopy data, identifying changes in spectral shape within large Monte Carlo calibration libraries to estimate source characteristics for unseen field results. Additionally, a 71 × 71 mm lanthanum bromide detector was tested alongside a conventional 71 × 71 mm sodium iodide detector to assess whether its higher energy efficiency and resolution could make it more reliable in handheld surveys. The research presented in this thesis demonstrates that machine learning algorithms can be successfully applied to noisy spectra to produce valuable source estimates. Of note were the novel characterisation estimates made on borehole and handheld detector measurements taken from land historically contaminated with 226Ra. Through a novel combination of noise suppression and neural networks, the burial depth, activity and extent of contamination were estimated and mapped. Furthermore, it was demonstrated that machine learning techniques could be operated in real time to identify hazardous 226Ra-containing hot particles with much greater confidence than current deterministic approaches such as the gross counting algorithm. It was concluded that remediation of 226Ra-contaminated legacy sites could be greatly improved using the methods described in this thesis. Finally, neural networks were also applied to estimate the activity distribution of 137Cs, derived from the nuclear industry, in an estuarine environment. The findings showed the method to be theoretically sound but practically inconclusive, given that much of the contamination at the site was buried beyond the method's detection limits. It was generally concluded that the noise posed by intrinsic counts in the 71 × 71 mm lanthanum bromide detector was too substantial for it to offer any significant improvement over a comparable sodium iodide detector in contamination characterisation using 1-second counts.
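A hedged sketch of one idea above: training a neural network on a simulated calibration library to estimate source burial depth from spectral shape. The "Monte Carlo" spectra here are a crude analytic stand-in, and scikit-learn's MLP replaces whatever network architecture the thesis actually used.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
channels = np.arange(256)

def simulate_spectrum(depth):
    # Crude stand-in for a Monte Carlo library: the photopeak is attenuated
    # with burial depth while the scattered continuum grows.
    peak = 500 * np.exp(-0.8 * depth) * np.exp(-((channels - 120) ** 2) / 18)
    continuum = 60 * (1 - np.exp(-0.8 * depth)) * np.exp(-channels / 90)
    return rng.poisson(peak + continuum + 5).astype(float)

depths = rng.uniform(0, 1.0, 400)  # burial depth in metres
X = np.array([simulate_spectrum(d) for d in depths])
X /= X.sum(axis=1, keepdims=True)  # spectral shape, not gross count rate
model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
model.fit(X[:300], depths[:300])
print(abs(model.predict(X[300:]) - depths[300:]).mean(), "m mean error")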
226

Prediction of Code Lifetime

Nordfors, Per January 2017 (has links)
There are several previous studies in which machine learning algorithms are used to predict how fault-prone a piece of code is. This thesis takes a slightly different approach by attempting to predict how long a piece of code will remain unmodified after being written (its "lifetime"). This is based on the hypothesis that frequently modified code is more likely to contain weaknesses, which may make lifetime predictions useful for code evaluation purposes. In this thesis, the predictions are made with machine learning algorithms trained on open source code examples from GitHub. Two different machine learning algorithms are used: the multilayer perceptron and the support vector machine. A piece of code is described by three groups of features: code contents, code properties obtained from static code analysis, and metadata from the version control system Git. A series of experiments shows that the support vector machine is the best-performing algorithm and that all three feature groups are useful for predicting lifetime. Both the multilayer perceptron and the support vector machine outperform a baseline prediction which always outputs the mean lifetime of the training set. This indicates that lifetime can, to some extent, be predicted from information extracted from the code. However, lifetime prediction performance is shown to be highly dataset-dependent, with large error magnitudes.
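A minimal sketch of the evaluation described above: a support-vector regressor compared against the baseline that always predicts the mean lifetime of the training set. The three features and the synthetic lifetimes are invented for illustration, not the thesis's GitHub data.

import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(3)
# Hypothetical features per code fragment: [lines, nesting depth, past edits]
X = rng.uniform(0, 1, size=(300, 3))
lifetime = 365 * np.exp(-2 * X[:, 2]) + 30 * rng.standard_normal(300)  # days

X_tr, X_te, y_tr, y_te = train_test_split(X, lifetime, random_state=0)
baseline = DummyRegressor(strategy="mean").fit(X_tr, y_tr)  # mean-lifetime baseline
svr = SVR(C=100).fit(X_tr, y_tr)
print("baseline R2:", baseline.score(X_te, y_te))
print("SVR R2:     ", svr.score(X_te, y_te))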
227

Vers la segmentation automatique des organes à risque dans le contexte de la prise en charge des tumeurs cérébrales par l’application des technologies de classification de deep learning / Towards automatic segmentation of the organs at risk in brain cancer context via a deep learning classification scheme

Dolz, Jose 15 June 2016 (has links)
Brain tumors are a leading cause of death and disability worldwide, accounting for 14.1 million new cancer cases and 8.2 million deaths in 2012 alone. Radiotherapy and radiosurgery are among the arsenal of available techniques to treat them. Because both techniques involve delivering a very high dose of radiation, the tumor and the surrounding healthy tissues must be delineated precisely. In practice, delineation is performed manually by experts, with little or no machine assistance. It is therefore a highly time-consuming process with significant inter- and intra-observer variability, and radiation oncologists, radiology technologists and other medical specialists spend a substantial portion of their time on medical image segmentation. If automating this process can achieve a more repeatable set of contours that the majority of oncologists can agree upon, it will improve the quality of treatment. In addition, any method that reduces the time taken to perform this step increases patient throughput and makes more effective use of the oncologist's skills. Today, automatic segmentation techniques are rarely employed in clinical routine. When they are, they typically rely on registration approaches, in which anatomical information annotated in advance by experts on a reference patient, referred to as an atlas, is deformed and matched to the patient under examination. The quality of the deformed contours depends directly on the quality of the deformation. Registration techniques, however, rely on regularization models of the deformation field whose parameters are complex to adjust and whose quality is difficult to evaluate. Tools that assist the segmentation task are therefore highly desirable in clinical practice. The main objective of this thesis is to provide medical specialists (radiotherapists, neurosurgeons, radiologists) with automatic tools to delineate the organs at risk of patients undergoing brain radiotherapy or stereotactic radiosurgery. To achieve this goal, the main contributions of this thesis are presented along two major axes. First, we consider one of the latest hot topics in artificial intelligence to tackle the segmentation problem, namely deep learning, a set of techniques that offers advantages over classical machine learning methods. The second axis is dedicated to the study of image features, mainly texture and contextual information from MR images. These features, absent from classical machine learning methods for segmenting organs at risk, lead to significant improvements in segmentation performance; we therefore include them in a deep network for segmenting the organs at risk of the brain. We demonstrate in this work the feasibility of using such a deep learning-based classification scheme for this particular problem, show that the proposed method achieves high performance in both accuracy and efficiency, and show that the automatic segmentations it produces lie within the variability of the experts. The results demonstrate that our method not only outperforms a state-of-the-art classifier but also provides results that would be usable in radiation treatment planning.
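A toy sketch of patch-wise classification for segmentation, with scikit-learn's MLP standing in for the deep network and simple patch statistics standing in for the texture/context features; real MR volumes and the thesis's architecture are not reproduced.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
# Toy 2D "MRI" slice: a bright circular "organ" on a noisy background
yy, xx = np.mgrid[0:64, 0:64]
organ = ((yy - 32) ** 2 + (xx - 32) ** 2) < 100
image = organ * 1.0 + 0.3 * rng.standard_normal((64, 64))

def patches(img, mask, size=5):
    # Classify the centre pixel of each patch; the raw patch plus its
    # mean/std act as crude texture/context features, echoing the thesis
    r = size // 2
    X, y = [], []
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            p = img[i - r:i + r + 1, j - r:j + r + 1].ravel()
            X.append(np.concatenate([p, [p.mean(), p.std()]]))
            y.append(mask[i, j])
    return np.array(X), np.array(y)

X, y = patches(image, organ)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X[::2], y[::2])  # train on every other pixel
print("held-out pixel accuracy:", clf.score(X[1::2], y[1::2]))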
228

Lane Change Intent Analysis for Preceding Vehicles : a Study Using Various Machine Learning Techniques / Analys av framförvarande fordons filbytesintentioner : En studie utnyttjande koncept från maskininlärning

Fredrik, Ljungberg January 2017 (has links)
In recent years, the level of technology in heavy-duty vehicles has increased significantly. Progress has been made towards autonomous driving, with increased driver comfort and safety, partly through the use of advanced driver assistance systems (ADAS). In this thesis, the possibilities of detecting and predicting lane changes of the preceding vehicle are studied; this information can help improve the decision-making of safety systems. Some suitable approaches to the problem are presented, along with an evaluation of their accuracies. Modelling human perceptions and actions is a challenging task. Several thousand kilometres of driving data were available, and a reasonable course of action was to let the system learn from these data offline. It was therefore decided to explore a branch of artificial intelligence called supervised learning, and the study of driving intentions was formulated as a binary classification problem. To distinguish between lane-change and lane-keep actions, four machine learning techniques were evaluated: naive Bayes, artificial neural networks, support vector machines and Gaussian processes. As input to the classifiers, fused sensor signals from systems commercially available in Scania vehicles today were used. The project was carried out as a Master's thesis in collaboration between Linköping University and Scania CV AB, a leading manufacturer of heavy trucks, buses and coaches, alongside industrial and marine engines.
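A compact sketch comparing the four classifier families named above on a toy version of the task; the three "fused sensor" features and the synthetic labels are assumptions, not Scania's signals.

import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(4)
# Hypothetical fused-sensor features for the preceding vehicle:
# [lateral offset (m), lateral velocity (m/s), yaw rate (rad/s)]
n = 400
y = rng.integers(0, 2, n)  # 1 = lane change, 0 = lane keep (toy labels)
X = rng.standard_normal((n, 3)) + y[:, None] * [0.8, 0.5, 0.2]

for clf in [GaussianNB(), MLPClassifier(max_iter=1000),
            SVC(), GaussianProcessClassifier()]:
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, round(score, 3))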
229

Classification partiellement supervisée par SVM : application à la détection d’événements en surveillance audio / Partially Supervised Classification Based on SVM : application to Audio Events Detection for Surveillance

Lecomte, Sébastien 09 December 2013 (has links)
This thesis addresses partially supervised Support Vector Machine methods for novelty detection (One-Class SVM). They were studied with the aim of detecting abnormal audio events for the surveillance of public infrastructures, in particular in public transportation systems. In this context, the "normal ambience" hypothesis is relatively well known (even though the corresponding signals can be highly non-stationary), whereas every "abnormal" signal must be detectable and, if possible, clustered with signals of the same nature. A reference system based on a single model of the normal ambience is therefore presented first, and we then propose to use several One-Class SVMs in competition to cluster new data. The amount of data to process motivated a study of solvers suited to these problems, and since the algorithms must run in real time, we also investigated solvers with warm-start capabilities. Through the study of these solvers, we propose a unified formulation of the one-class and two-class problems, with and without bias. The proposed approaches were validated on a set of real signals, and a demonstrator integrating real-time abnormal event detection for the surveillance of a subway station was presented as part of the European project VANAHEIM.
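A minimal sketch of the One-Class SVM idea: train only on "normal ambience" feature vectors and flag outliers at test time. The eight-dimensional features and the shift applied to the "events" are illustrative stand-ins for real audio descriptors.

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(6)
# Train on normal-ambience features only (e.g. spectral summaries);
# abnormal events are flagged at test time as outliers
normal = rng.standard_normal((500, 8))
ocsvm = OneClassSVM(kernel="rbf", nu=0.05).fit(normal)

ambience = rng.standard_normal((5, 8))       # more of the same ambience
events = rng.standard_normal((5, 8)) + 4.0   # e.g. shouts, breaking glass
print(ocsvm.predict(ambience))  # mostly +1 (normal)
print(ocsvm.predict(events))    # -1 (abnormal)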
230

Abnormal detection in video streams via one-class learning methods / Algorithmes d'apprentissage mono-classe pour la détection d'anomalies dans les flux vidéo

Wang, Tian 06 May 2014 (has links)
Visual surveillance is one of the major research areas in computer vision. The scientific challenge in this area includes the implementation of automatic systems for obtaining detailed information about the behaviour of individuals and groups; in particular, detecting abnormal movements of groups of individuals requires fine-grained analysis of the frames of the video stream. This thesis focuses on the detection of abnormal events, covering both the design of an effective image descriptor characterizing the motion information and non-linear, one-class kernel-based classification methods. Three image features are proposed to build the motion descriptor: (i) global optical-flow features, (ii) histograms of optical-flow orientations (HOFO), and (iii) a covariance matrix (COV) descriptor fusing optical flow with other spatial image features. Based on these descriptors, one-class support vector machines (SVMs) are used to detect abnormal events. Two online one-class SVM strategies are proposed: the first is based on support vector data description (online SVDD) and the second on online least-squares one-class support vector machines (online LS-OC-SVM).
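A small sketch of the HOFO descriptor (feature (ii) above): a magnitude-weighted histogram of optical-flow orientations, computed here on synthetic flow fields rather than flow estimated from video; the bin count and threshold are illustrative choices.

import numpy as np

def hofo(flow, bins=8, mag_threshold=0.5):
    # Quantise the direction of each sufficiently large flow vector,
    # weighting each by its magnitude, then normalise the histogram
    fx, fy = flow[..., 0].ravel(), flow[..., 1].ravel()
    mag = np.hypot(fx, fy)
    ang = np.arctan2(fy, fx) % (2 * np.pi)
    keep = mag > mag_threshold
    hist, _ = np.histogram(ang[keep], bins=bins, range=(0, 2 * np.pi),
                           weights=mag[keep])
    return hist / (hist.sum() + 1e-9)

rng = np.random.default_rng(7)
coherent = np.stack([np.ones((32, 32)),
                     0.1 * rng.standard_normal((32, 32))], -1)
chaotic = rng.standard_normal((32, 32, 2)) * 2.0  # panic-like scattered motion
print("coherent crowd:", hofo(coherent).round(2))
print("chaotic crowd: ", hofo(chaotic).round(2))

A one-class SVM trained on HOFO vectors from normal footage would then flag frames whose histograms fall outside the learned region.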
