1

Applying Discriminant Functions with One-Class SVMs for Multi-Class Classification

Lee, Zhi-Ying 09 August 2007 (has links)
AdaBoost.M1 has been successfully applied to improve the accuracy of a learning algorithm for multi-class classification problems. However, it assumes that the performance of each base classifier is better than 1/2, which may be hard to achieve in practice for a multi-class problem. A new algorithm called AdaBoost.MK, which only requires base classifiers better than random guessing (1/k), is thus designed. Early SVM-based multi-class classification algorithms work by splitting the original problem into a set of two-class sub-problems, and the time and space they require are very demanding. In order to obtain low time and space complexities, we develop a base classifier that integrates one-class SVMs with discriminant functions. In this study, a hybrid method that combines AdaBoost.MK with one-class SVMs and improved discriminant functions as the base classifiers is proposed to solve multi-class classification problems. Experimental results on data sets from UCI and Statlog show that the proposed approach outperforms many popular multi-class algorithms, including support vector clustering and AdaBoost.M1 with one-class SVMs as the base classifiers.
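As a rough illustration of the base-classifier idea described above (one one-class SVM per class, with each SVM's decision value acting as a discriminant function and the largest value deciding the label), a minimal sketch assuming scikit-learn might look as follows; it is not the thesis's AdaBoost.MK ensemble or its improved discriminant functions, and all parameter values are placeholders.

```python
# Hedged sketch: one OneClassSVM per class; the signed decision value serves as
# a discriminant function and the predicted label is the class with the largest
# value. Illustrative only -- not the AdaBoost.MK ensemble from the thesis.
import numpy as np
from sklearn.svm import OneClassSVM

class OneClassSVMMulticlass:
    def __init__(self, nu=0.1, gamma="scale"):
        self.nu, self.gamma = nu, gamma
        self.models = {}

    def fit(self, X, y):
        for label in np.unique(y):
            m = OneClassSVM(kernel="rbf", nu=self.nu, gamma=self.gamma)
            m.fit(X[y == label])              # train only on this class's samples
            self.models[label] = m
        return self

    def predict(self, X):
        labels = list(self.models)
        scores = np.column_stack([self.models[c].decision_function(X)
                                  for c in labels])
        return np.asarray(labels)[scores.argmax(axis=1)]
```

Such base classifiers could then be reweighted and combined by a boosting loop, which is where the thesis's AdaBoost.MK and improved discriminant functions come in.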
2

Performance of One-class Support Vector Machine (SVM) in Detection of Anomalies in the Bridge Data

Dalvi, Aditi January 2017 (has links)
No description available.
3

Machines à noyaux pour le filtrage d'alarmes : application à la discrimination multiclasse en environnement maritime / Kernel machines for alarm filtering: application to multiclass discrimination in the naval context

Labbé, Benjamin 03 May 2011 (has links)
Infrared systems are key to providing armed forces with a threat-recognition capability. In operational settings, these systems are constrained to real-time processing and low false-alarm rates, which implies recognizing threats among numerous irrelevant objects. In this work, we combine one-class Support Vector Machines (SVMs) to discriminate in a multiclass framework while rejecting unknown objects (preserving the false-alarm rate). During learning, we perform variable selection to control the sparsity of the decision functions. We also introduce a new classifier, the Discriminative OneClass-SVM, which combines properties of both the two-class SVM and the one-class SVM in a multiclass framework. This novelty detector has no dependency on the number of classes, which allows it to be used on large-scale data. Numerical experiments on real-world infrared datasets demonstrate the relevance of our proposals for highly constrained systems compared to standard methods.
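The reject mechanism described above can be pictured with a small hedged sketch on top of per-class one-class SVMs such as those in the previous sketch: a sample is assigned to the best-scoring class only if that score clears a threshold, otherwise it is rejected as an unknown object, which is how the false-alarm rate is preserved. The threshold value, the assumption of integer class labels, and the models themselves are illustrative, not the thesis's Discriminative OneClass-SVM.

```python
# Hedged sketch of a reject rule for false-alarm control on top of per-class
# one-class SVMs (integer class labels assumed; threshold is a placeholder).
import numpy as np

def predict_with_reject(models, X, threshold=0.0, reject_label=-1):
    """models: dict mapping an integer class label to a fitted OneClassSVM."""
    labels = list(models)
    scores = np.column_stack([models[c].decision_function(X) for c in labels])
    y_pred = np.asarray(labels)[scores.argmax(axis=1)]
    y_pred[scores.max(axis=1) < threshold] = reject_label   # unknown object
    return y_pred
```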
4

Méthodes de classifications dynamiques et incrémentales : application à la numérisation cognitive d'images de documents / Incremental and dynamic learning for document image : application for intelligent cognitive scanning of documents

Ngo Ho, Anh Khoi 19 March 2015 (has links)
This research contributes to the field of dynamic learning and classification in stationary and non-stationary environments. The goal of this PhD is to define a new classification framework that can cope with very small training sets at the beginning of the process and adjust its models to the variability of the data arriving in a stream. For that purpose, we propose a solution based on a combination of independent one-class SVM classifiers, each with its own incremental learning procedure, so that no classifier is subject to cross-influences arising from the configuration of the other classifiers' models. The originality of our proposal lies in combining the prior knowledge kept in the SVM models (the history of each SVM, represented by the set of support vectors found so far) with the knowledge brought by new data as it arrives. The proposed classification model (mOC-iSVM) is exploited through three variants that differ in how the history of the models is used. Our contribution addresses a state of the art in which, to date, no solution handles concept drift, the addition or deletion of concepts, and the fusion or division of concepts at the same time, while also offering a convenient framework for interaction with the user. Within the ANR DIGIDOC project, our approach was applied to several image-stream classification scenarios that can arise in real digitization campaigns. These scenarios validated an interactive use of our incremental classification solution to classify images arriving in a stream in order to improve the quality of the digitized images.
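A minimal sketch of the incremental principle described above (refit each one-class SVM on its retained support vectors plus the newly arrived samples of its class, so that old knowledge survives in compressed form) is given below, assuming scikit-learn and placeholder parameters; it approximates the mOC-iSVM idea but is not the thesis's implementation or any of its three variants.

```python
# Hedged sketch: the "history" of the classifier is the set of support vectors
# kept from earlier batches; each update refits on that history plus new data.
import numpy as np
from sklearn.svm import OneClassSVM

class IncrementalOneClassSVM:
    def __init__(self, nu=0.1, gamma="scale"):
        self.nu, self.gamma = nu, gamma
        self.model = None
        self.memory = None                      # support vectors from the past

    def partial_fit(self, X_new):
        X = X_new if self.memory is None else np.vstack([self.memory, X_new])
        self.model = OneClassSVM(kernel="rbf", nu=self.nu,
                                 gamma=self.gamma).fit(X)
        self.memory = self.model.support_vectors_   # compress the history
        return self

    def decision_function(self, X):
        return self.model.decision_function(X)
```

One such classifier per concept keeps the learning of each class independent, which is the property the thesis relies on when concepts are added, removed, split, or merged.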
5

DIAGNÓSTICO DE DIABETES TIPO II POR CODIFICAÇÃO EFICIENTE E MÁQUINAS DE VETOR DE SUPORTE / DIAGNOSIS OF TYPE II DIABETES BY EFFICIENT CODING AND SUPPORT VECTOR MACHINES

Ribeiro, Aurea Celeste da Costa 30 June 2009 (has links)
Diabetes is a disease caused by the pancreas failing to produce insulin. It is incurable, and its treatment is based on diet, exercise, and drugs. The costs of diagnosis and of the human resources it requires have become high and inefficient, and computer-aided diagnosis (CAD) systems are essential to address this problem. Our study proposes a CAD system based on the one-class support vector machine (SVM) method and on efficient coding with independent component analysis (ICA) to classify a patient data set into diabetics and non-diabetics. First, classification tests were run using both the non-invasive and the invasive characteristics of the disease. Then we ran a test without the invasive characteristics, plasma glucose concentration and 2-hour serum insulin (mu U/ml), which require blood samples. We obtained accuracies of 99.84% and 99.28%, respectively. Further tests were run without the invasive characteristics, also excluding one non-invasive characteristic at a time, to observe the influence of each one on the final results.
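A hedged sketch of this kind of pipeline, assuming scikit-learn, FastICA as the efficient-coding step, and placeholder parameters (the thesis's exact configuration, features, and data are not reproduced here):

```python
# Illustrative pipeline: standardize, project onto independent components
# (efficient coding), then model one group with a one-class SVM.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FastICA
from sklearn.svm import OneClassSVM

cad = make_pipeline(
    StandardScaler(),
    FastICA(n_components=6, random_state=0),    # efficient coding of the inputs
    OneClassSVM(kernel="rbf", nu=0.05, gamma="scale"),
)
# cad.fit(X_modelled_class)    # records of the modelled class only
# y_pred = cad.predict(X_all)  # +1 = consistent with that class, -1 = outlier
```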
6

Comparing Anomaly-Based Network Intrusion Detection Approaches Under Practical Aspects

Helmrich, Daniel 07 July 2021 (has links)
While many currently used network intrusion detection systems (NIDS) employ signature-based approaches, there is increasing research interest in anomaly-based detection methods, which seem better suited to recognizing zero-day attacks. Nevertheless, the requirements for their practical deployment, as well as objective and reproducible evaluation methods, are often neglected. This thesis defines aspects that are crucial for a practical evaluation of anomaly-based NIDS, such as a focus on modern attack types, the restriction to one-class classification methods, the exclusion of known attacks from the training phase, a low false detection rate, and consideration of runtime efficiency. Based on those principles, a framework dedicated to developing, testing, and evaluating models for the detection of network anomalies is proposed. It is applied to two datasets featuring modern traffic, namely the UNSW-NB15 and CIC-IDS-2017 datasets, in order to compare and evaluate commonly used network intrusion detection methods. The implemented approaches include, among others, a highly configurable network flow generator, a payload analyser, a one-hot encoder, a one-class support vector machine, and an autoencoder. The results show a significant difference between the two chosen datasets: while several reasonably well performing model combinations can be found for both the autoencoder and the one-class SVM on the UNSW-NB15 dataset, most of them yield unsatisfying results on the CIC-IDS-2017 dataset.
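A condensed, hedged sketch of the kind of model combination evaluated in the thesis is shown below: categorical flow attributes are one-hot encoded, numeric ones scaled, and a one-class SVM is fit on benign traffic only. The column names and parameter values are hypothetical, not those of the thesis framework or of the UNSW-NB15/CIC-IDS-2017 feature sets.

```python
# Hedged sketch: preprocessing + one-class SVM trained on benign flows only.
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import OneClassSVM

categorical = ["protocol", "service"]               # hypothetical flow columns
numeric = ["duration", "src_bytes", "dst_bytes"]    # hypothetical flow columns

pre = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ("num", StandardScaler(), numeric),
])
nids = make_pipeline(pre, OneClassSVM(kernel="rbf", nu=0.01, gamma="scale"))
# nids.fit(benign_flows)                    # a DataFrame with no known attacks
# alerts = nids.predict(test_flows) == -1   # -1 = flagged as anomalous
```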
7

Machine Learning Based Failure Detection in Data Centers

Piran Nanekaran, Negin January 2020 (has links)
This work proposes a new approach to fast detection of abnormal behaviour of cooling, IT, and power distribution systems in micro data centers based on machine learning techniques. Conventional protection of micro data centers focuses on monitoring individual parameters, such as temperature at different locations, and triggering an alarm when these parameters reach certain high values. This research employs machine learning techniques to extract the normal and abnormal behaviour of the cooling and IT systems. The developed data acquisition system, together with unsupervised learning methods, quickly learns the physical dynamics of normal operation and can detect deviations from such behaviour. This provides an efficient way of producing not only a health index for the micro data center but also a rich label-logging system that can be used by supervised learning methods. The effectiveness of the proposed detection technique is evaluated on a micro data center located at the Computing Infrastructure Research Center (CIRC) in McMaster Innovation Park (MIP), McMaster University. / Thesis / Master of Science (MSc)
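As a hedged illustration of how unsupervised anomaly scores over sensor readings (temperature, power, fan speed, and so on) could be turned into a simple health index, one might write something like the sketch below; the choice of IsolationForest, the feature set, and the 0-100 scaling are assumptions for illustration, since the abstract does not name the exact methods used.

```python
# Hedged sketch: fit an unsupervised model on readings from normal operation,
# then squash its anomaly score into a rough 0-100 health index.
import numpy as np
from sklearn.ensemble import IsolationForest

def fit_health_model(X_normal):
    """X_normal: rows of sensor readings recorded during normal operation."""
    return IsolationForest(n_estimators=200, random_state=0).fit(X_normal)

def health_index(model, X):
    # decision_function is positive for samples that look like normal operation
    # and negative for anomalies; the logistic scaling to (0, 100) is arbitrary.
    score = model.decision_function(X)
    return 100.0 / (1.0 + np.exp(-10.0 * score))
```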
8

Deep Learning Empowered Unsupervised Contextual Information Extraction and its applications in Communication Systems

Gusain, Kunal 16 January 2023 (has links)
Master of Science / There has been an astronomical increase in data at the network edge due to the rapid development of 5G infrastructure and the proliferation of the Internet of Things (IoT). In order to improve the network controller's decision-making capabilities and the user experience, it is of paramount importance to analyze this data properly. However, transporting such a large amount of data from edge devices to the network controller requires large bandwidth and increases latency, presenting a significant challenge for resource-constrained wireless networks. Information processing techniques can effectively address this problem by sending only pertinent and critical information to the network controller. Nevertheless, finding critical information in high-dimensional observations is not an easy task, especially when large amounts of background information are present. Our thesis proposes to extract critical but low-dimensional information from high-dimensional observations using an information-theoretic deep learning framework. We focus on two distinct problems where critical information extraction is imperative. In the first problem, we study feature extraction from video frames collected in a dynamic environment and showcase its effectiveness using a video game simulation experiment. In the second problem, we investigate the detection of anomaly signals in the spectrum by extracting and analyzing useful features from spectrograms. Using extensive simulation experiments based on a practical data set, we conclude that our proposed approach is highly effective in detecting anomaly signals over a wide range of signal-to-noise ratios.
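A generic reconstruction-error baseline helps convey the idea of extracting low-dimensional features from spectrograms and flagging deviations; the sketch below is a hedged stand-in written with PyTorch, not the information-theoretic deep learning framework developed in the thesis, and the layer sizes are placeholders.

```python
# Hedged sketch: an autoencoder trained on frames of "normal" spectrum; frames
# with a large reconstruction error are treated as candidate anomaly signals.
import torch
import torch.nn as nn

class SpectrogramAE(nn.Module):
    def __init__(self, n_bins, code_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bins, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, n_bins))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model, frames):
    """frames: (N, n_bins) tensor of spectrogram columns."""
    with torch.no_grad():
        recon = model(frames)
    return ((frames - recon) ** 2).mean(dim=1)   # high error = likely anomaly
```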
9

Privacy preserving software engineering for data driven development

Tongay, Karan Naresh 14 December 2020 (has links)
The exponential rise in the generation of data has introduced many new areas of research, including data science, data engineering, machine learning, and artificial intelligence, to name a few. It has become important for any industry or organization to precisely understand and analyze its data in order to extract value from it. The value of the data can only be realized when it is put into practice in the real world, and the most common way of doing this in the technology industry is through software engineering. This brings into the picture the area of privacy-oriented software engineering, alongside the rise of data protection regulations such as the GDPR (General Data Protection Regulation) and the PDPA (Personal Data Protection Act). Many organizations, governments, and companies that have accumulated huge amounts of data over time may readily use that data to increase business value, but at the same time the privacy aspects associated with sensitive data, especially people's personal information, can easily be circumvented when designing a software engineering model for these types of applications. Even before the software engineering phase of a data-processing application, there are often one or more data-sharing agreements or privacy policies in place. Every organization may have its own way of maintaining data privacy practices for data-driven development, and there is a need to generalize or categorize these approaches into tactics that other practitioners can refer to when trying to integrate data privacy practices into their development. This qualitative study provides an understanding of various approaches and tactics that are practised within the industry for privacy-preserving data science in software engineering, and discusses a tool for data-usage monitoring to identify unethical data access. Finally, we studied strategies for secure data publishing and conducted experiments using sample data to demonstrate how these techniques can help secure private data before publishing. / Graduate
