About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

New Insights into Decision Tree Ensembles

Pisetta, Vincent 28 March 2012 (has links)
Decision tree ensembles are among the most popular tools in machine learning. Nevertheless, their theoretical properties as well as their empirical performance remain under active investigation. In this thesis, we propose to shed new light on these methods. More precisely, after describing the current theoretical aspects of three main ensemble schemes (Random Forests, Boosting, and Stochastic Discrimination) in chapter 1, we give an analysis supporting the existence of a common reason for the success of these three principles (chapter 2). This analysis identifies the first two moments of the margin as an essential ingredient for obtaining strong learning abilities. Starting from this observation, we propose a new ensemble algorithm called OSS (Oriented Sub-Sampling), whose steps follow directly from the framework we introduce. The empirical performance of OSS is superior to that of currently popular algorithms such as Random Forests and AdaBoost. In a third part (chapter 3), we analyze Random Forests from a "kernel" point of view. This perspective improves our understanding of forests, in particular by allowing us to understand and observe the underlying regularization mechanism of these techniques. Adopting the kernel point of view also enables us to improve Random Forests using popular post-processing techniques such as SVMs and multiple kernel learning. These post-processed models show greatly improved performance over the base algorithm and also make it possible to prune the ensemble, conserving only a small fraction of the initial base learners.
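To make the kernel reading concrete, here is a minimal sketch (an illustration, not the thesis's code) using the standard Random Forest proximity kernel, where K(x, x') is the fraction of trees routing x and x' to the same leaf; the resulting Gram matrix is then post-processed by an SVM with a precomputed kernel, one of the routes the abstract names.

```python
# Sketch: Random Forest proximity kernel + SVM post-processing (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

leaves = rf.apply(X)  # (n_samples, n_trees): leaf index of each sample in each tree
K = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)  # proximity kernel

svm = SVC(kernel="precomputed").fit(K, y)  # SVM post-processing of the forest
print("training accuracy:", svm.score(K, y))
```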
12

A Boosted-Window Ensemble

Elahi, Haroon January 2014 (has links)
Context. The problem of obtaining predictions from stream data involves training on labeled instances and suggesting class values for unseen stream instances. The nature of data-stream environments makes this task complicated. The large number of instances, the possibility of changes in the data distribution, the presence of noise, and drifting concepts are just some of the factors that add complexity to the problem. Various supervised-learning algorithms have been designed by putting together efficient data-sampling, ensemble-learning, and incremental-learning methods. The performance of such an algorithm depends on the chosen methods, which leaves an opportunity to design new supervised-learning algorithms from different combinations of constituent methods. Objectives. This thesis proposes a fast and accurate supervised-learning algorithm for performing predictions on data streams. The algorithm, called the Boosted-Window Ensemble (BWE), is built using the mixture-of-experts technique. BWE uses a sliding window, online boosting, and incremental learning for data-sampling, ensemble-learning, and maintaining a consistent state with the current stream data, respectively. In this regard, a new sliding-window method is introduced. This method uses partial updates for sliding the window over the data stream and is called the Partially-Updating Sliding Window (PUSW). An investigation is carried out comparing two variants of the sliding window and three different ensemble-learning methods in order to choose the superior methods. Methods. The thesis uses an experimental approach to evaluate the Boosted-Window Ensemble (BWE). CPU time and prediction accuracy are used as performance indicators, where CPU time is the execution time in seconds. The benchmark algorithms are Accuracy-Updated Ensemble 1 (AUE1), Accuracy-Updated Ensemble 2 (AUE2), and Accuracy-Weighted Ensemble (AWE). The experiments use nine synthetic and five real-world datasets to generate performance estimates. The asymptotic Friedman test and the Wilcoxon signed-rank test are used for hypothesis testing, and the Wilcoxon-Nemenyi-McDonald-Thompson test for post-hoc analysis. Results. The hypothesis testing suggests that: 1) for both the synthetic and real-world datasets, BWE has significantly lower CPU-time values than two of the benchmark algorithms (AUE1 and AWE); 2) BWE returns prediction accuracy similar to AUE1 and AWE on the synthetic datasets; 3) BWE returns prediction accuracy similar to all three benchmark algorithms on the real-world datasets. Conclusions. The experimental results demonstrate that the proposed algorithm can be as accurate as the state-of-the-art benchmark algorithms while obtaining predictions from stream data. The results further show that the use of the Partially-Updating Sliding Window results in lower CPU time for BWE compared with the chunk-based sliding-window method used in AUE1, AUE2, and AWE.
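The abstract does not spell out PUSW's exact update rule; the sketch below is one plausible reading, in which each update evicts only a small fraction of the oldest window instances rather than replacing a whole chunk. The capacity and eviction fraction are illustrative assumptions, not the thesis's values.

```python
# One plausible reading of a partially-updating sliding window (assumption,
# not the thesis's specification): evict only a fraction of the oldest
# instances per step, instead of chunk-wise replacement as in AWE/AUE1/AUE2.
from collections import deque

class PartiallyUpdatingSlidingWindow:
    def __init__(self, capacity=1000, update_fraction=0.1):
        self.window = deque()
        self.capacity = capacity
        self.evict = max(1, int(capacity * update_fraction))

    def add(self, instance, label):
        self.window.append((instance, label))
        if len(self.window) > self.capacity:
            for _ in range(self.evict):
                self.window.popleft()  # partial update: drop only the oldest block

    def training_data(self):
        return list(self.window)  # feed to the boosted ensemble for (re)training
```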
13

An investigation of ensemble methods to improve the bias and/or variance of option pricing models based on Lévy processes

Steinki, Oliver January 2015 (has links)
This thesis introduces a novel theoretical option pricing ensemble framework to improve the bias and variance of option pricing models, especially those based on Lévy processes. In particular, we present a completely new, yet very general, theoretical framework to calibrate and combine several option pricing models using ensemble methods. This framework has four main steps: general option pricing tasks, ensemble generation, ensemble pruning and ensemble integration. Its modularity allows for a flexible implementation in terms of asset classes, base models, pricing techniques and ensemble architecture.
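The four-step pipeline lends itself to a compact sketch. The code below is a schematic reading of the generation, pruning and integration steps, not the thesis's framework; the mean-absolute-error pruning criterion and the inverse-error weighting are assumptions.

```python
# Schematic ensemble pricing pipeline (illustrative assumptions throughout).
import numpy as np

def ensemble_price(pricers, options, market_prices, keep=3):
    # ensemble generation: price every option with every calibrated base model
    quotes = np.array([[price(opt) for opt in options] for price in pricers])
    # ensemble pruning: keep the models closest to the observed market prices
    errors = np.abs(quotes - market_prices).mean(axis=1)
    kept = np.argsort(errors)[:keep]
    # ensemble integration: inverse-error weighted combination of the survivors
    w = 1.0 / (errors[kept] + 1e-12)
    return (w[:, None] * quotes[kept]).sum(axis=0) / w.sum()
```

Here each element of `pricers` is assumed to be a calibrated callable (for example a Black-Scholes or Variance Gamma pricer) mapping an option's contract data to a model price.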
14

Ensemble Learning Techniques for Structured and Unstructured Data

King, Michael Allen 01 April 2015 (has links)
This research provides an integrated approach to applying innovative ensemble learning techniques that have the potential to increase the overall accuracy of classification models. Actual structured and unstructured data sets from industry are utilized during the research process, analysis and subsequent model evaluations. The first research section addresses the consumer demand forecasting and daily capacity management requirements of a nationally recognized alpine ski resort in the state of Utah, in the United States of America. A basic econometric model is developed and three classic predictive models are evaluated for effectiveness. These predictive models are subsequently used as input for four ensemble modeling techniques, and the ensemble learning techniques are shown to be effective. The second research section discusses the opportunities and challenges faced by a leading firm providing sponsored search marketing services. The goal of sponsored search marketing campaigns is to create advertising campaigns that better attract and motivate a target market to purchase. This research develops a method for classifying profitable campaigns and maximizing overall campaign portfolio profits. Four traditional classifiers are utilized, along with four ensemble learning techniques, to build classifier models that identify profitable pay-per-click campaigns. A MetaCost ensemble configuration, which can incorporate unequal classification costs, produced the highest campaign portfolio profit. The third research section addresses the management challenges of online consumer reviews encountered by service industries and how these textual reviews can be used for service improvements. A service improvement framework is introduced that integrates traditional text mining techniques and second-order feature derivation with ensemble learning techniques. The concept of GLOW and SMOKE words is introduced and shown to be an objective text-analytic source of service defects or service accolades. / Ph. D.
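The MetaCost procedure referenced above has a well-defined shape: bagged class-probability estimates relabel each training point with its minimum-expected-cost class, and an ordinary classifier is retrained on the relabeled data. The sketch below illustrates that procedure; the base learner and cost matrix are placeholders, not the study's configuration.

```python
# Sketch of MetaCost-style cost-sensitive relabeling (illustrative values).
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def metacost(X, y, cost):
    """cost[i, j] = cost of predicting class j when the true class is i."""
    bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25).fit(X, y)
    expected = bag.predict_proba(X) @ cost  # expected cost of each prediction
    y_relabeled = expected.argmin(axis=1)   # minimum-expected-cost relabeling
    return DecisionTreeClassifier().fit(X, y_relabeled)

# e.g. missing a profitable campaign (class 1) costs 5x a false positive:
# model = metacost(X, y, cost=np.array([[0, 1], [5, 0]]))
```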
15

Innovations of random forests for longitudinal data

Wonkye, Yaa Tawiah 07 August 2019 (has links)
No description available.
16

Random Forest Analogues for Mixture Discriminant Analysis

Mallo, Muz 09 June 2022 (has links)
Finite mixture modelling is a powerful and well-developed paradigm, having proven useful in unsupervised learning and, to a lesser extent, supervised learning (mixture discriminant analysis), especially in the case(s) of data with local variation and/or latent variables. It is the aim of this thesis to improve upon mixture discriminant analysis by introducing two types of random forest analogues, which are called MixForests. The first MixForest is based on Gaussian mixture models from the famous family of Gaussian parsimonious clustering models and will be useful in classifying lower dimensional data. The second MixForest extends the technique to higher dimensional data via the use of mixtures of factor analyzers from the well-known family of parsimonious Gaussian mixture models. MixForests will be utilized in the analysis of real data to demonstrate potential increases in classification accuracy as well as inferential procedures such as generalization error estimation and variable importance measures. / Thesis / Doctor of Philosophy (PhD)
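The abstract does not detail the MixForest construction; the sketch below is only a guess at the general shape of such an analogue, in which each ensemble member fits one Gaussian mixture per class on a bootstrap sample and members vote by class-conditional log-likelihood. The component count and voting rule are assumptions, and it assumes every class appears in each bootstrap sample.

```python
# A guessed sketch of a random-forest analogue of mixture discriminant
# analysis: bootstrap + per-class Gaussian mixtures + majority voting.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_member(X, y, n_components=2, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.choice(len(X), size=len(X), replace=True)  # bootstrap sample
    Xb, yb = X[idx], y[idx]
    return {c: GaussianMixture(n_components, random_state=0).fit(Xb[yb == c])
            for c in np.unique(yb)}

def predict(ensemble, X, classes):
    votes = np.zeros((len(X), len(classes)))
    for member in ensemble:
        ll = np.column_stack([member[c].score_samples(X) for c in classes])
        votes[np.arange(len(X)), ll.argmax(axis=1)] += 1  # majority voting
    return np.asarray(classes)[votes.argmax(axis=1)]
```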
17

Predicting the Stock Market Using News Sentiment Analysis

Memari, Majid 01 May 2018 (has links) (PDF)
ABSTRACT MAJID MEMARI, for the Master of Science degree in Computer Science, presented on November 3rd, 2017 at Southern Illinois University, Carbondale, IL. Title: PREDICTING THE STOCK MARKET USING NEWS SENTIMENT ANALYSIS. Major Professor: Dr. Norman Carver. Big data is a term for data sets that are so large or complex that traditional data processing application software is inadequate to deal with them. GDELT is the largest, most comprehensive, and highest resolution open database ever created. It is a platform that monitors the world's news media from nearly every corner of every country in print, broadcast, and web formats, in over 100 languages, every moment of every day, stretching all the way back to January 1st, 1979, and updated daily [1]. Stock market prediction is the act of trying to determine the future value of a company stock or other financial instrument traded on an exchange. The successful prediction of a stock's future price could yield significant profit. The efficient-market hypothesis suggests that stock prices reflect all currently available information, and that any price changes not based on newly revealed information are inherently unpredictable [2]. On the other hand, other studies show that prices are predictable. Stock market prediction has long been an attractive topic and is extensively studied by researchers in different fields, with numerous studies of the correlation between stock market fluctuations and different data sources derived from the historical data of major world stock indices or from external information such as social media and news [6]. The main objective of this research is to investigate the accuracy of predicting the unseen prices of the Dow Jones Industrial Average using information derived from the GDELT database. The Dow Jones Industrial Average (DJIA) is a stock market index, one of several indices created by Wall Street Journal editor and Dow Jones & Company co-founder Charles Dow. This research is based on data sets of events from the GDELT database and daily prices of the DJIA from Yahoo Finance, all from March 2015 to October 2017. First, multiple classification machine learning models are applied to the generated datasets, and then multiple ensemble methods are applied as well. In statistics and machine learning, ensemble methods use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. Afterwards, performance is evaluated for each model using the optimized parameters. Finally, experimental results show that using ensemble methods has a significant positive impact on improving the prediction accuracy. Keywords: Big Data, GDELT, Stock Market, Prediction, Dow Jones Index, Machine Learning, Ensemble Methods
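As a concrete illustration of the ensemble step described above, the sketch below builds a soft-voting ensemble over three base classifiers to predict daily DJIA direction. The base learners and the X/y placeholders are assumptions; the study's actual models and GDELT-derived features are not specified in the abstract.

```python
# Illustrative soft-voting ensemble for a daily up/down DJIA label.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200)),
                ("nb", GaussianNB())],
    voting="soft")  # average the predicted probabilities across base models

# X: daily GDELT event/sentiment features; y: 1 if the DJIA closed up next day
# scores = cross_val_score(ensemble, X, y, cv=5)
```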
18

Machine Learning and Multivariate Statistics for Optimizing Bioprocessing and Polyolefin Manufacturing

Agarwal, Aman 07 January 2022 (has links)
Chemical engineers have routinely used computational tools for modeling, optimizing, and debottlenecking chemical processes. Because of the advances in computational science over the past decade, multivariate statistics and machine learning have become an integral part of the computerization of chemical processes. In this research, we look into using multivariate statistics, machine learning tools, and their combinations through a series of case studies, including a case with a successful industrial deployment of machine learning models for fermentation. We use both a commercial software tool, Aspen ProMV, and the open-source Python ecosystem to demonstrate the feasibility of the computational tools. This work demonstrates a novel application of ensemble-based machine learning methods in bioprocessing, particularly for the prediction of different fermenter types in a fermentation process (to allow for successful data integration) and the prediction of the onset of foaming. We apply two ensemble frameworks, Extreme Gradient Boosting (XGBoost) and Random Forest (RF), to build classification and regression models. Excessive foaming can interfere with the mixing of reactants and lead to problems such as decreased effective reactor volume, microbial contamination, product loss, and increased reaction time. Physical modeling of foaming is an arduous process, as it requires estimation of foam height, which is dynamic in nature and varies across processes. In addition to foaming prediction, we extend our work to control and prevent foaming by allowing data-driven ad hoc addition of antifoam, using exhaust differential pressure as an indicator of foaming. We use large-scale real fermentation data for six different types of sporulating microorganisms to predict foaming over multiple strains of microorganisms and build exploratory time-series-driven antifoam profiles for four different fermenter types. In order to successfully predict the antifoam addition from the large-scale multivariate dataset (about half a million instances across 163 batches), we use TPOT (Tree-based Pipeline Optimization Tool), an automated genetic-programming algorithm, to find the best pipeline among 600 candidate pipelines. Our antifoam profiles are able to decrease hourly volume retention by over 53% for a specific fermenter; a decrease in hourly volume retention leads to an increase in fermentation product yield. We also study two cases associated with the manufacturing of polyolefins, particularly LDPE (low-density polyethylene) and HDPE (high-density polyethylene). Through these cases, we showcase the use of machine learning and multivariate statistical tools to improve process understanding and enhance predictive capability for process optimization. By using indirect measurements such as temperature profiles, we demonstrate the viability of such measures for the prediction of polyolefin quality parameters, anomaly detection, and statistical monitoring and control of the chemical processes associated with an LDPE plant. We use dimensionality reduction, visualization tools, and regression analysis to achieve our goals. Using advanced analytical tools and a combination of algorithms such as PCA (Principal Component Analysis), PLS (Partial Least Squares), and Random Forest, we identify predictive models that can be used to create inferential schemes. Soft sensors are widely used for on-line monitoring and real-time prediction of process variables.
In one of our cases, we use advanced machine learning algorithms to predict the polymer melt index, which is crucial in determining the product quality of polymers. We use real industrial data from one of the leading chemical engineering companies in the Asia-Pacific region to build a predictive model for an HDPE plant. Lastly, we show an end-to-end workflow for deep learning on both industrial and simulated polyolefin datasets. Thus, using these five cases, we explore the usage of advanced machine learning and multivariate statistical techniques in the optimization of chemical and biochemical processes. The recent advances in computational hardware allow engineers to design such data-driven models, which enhances their capacity to effectively and efficiently monitor and control a process. We showcase that even non-expert chemical engineers can implement such machine learning algorithms with ease using open-source or commercially available software tools. / Doctor of Philosophy / Most chemical and biochemical processes are equipped with advanced probes and connectivity sensors that collect large amounts of data on a daily basis. It is critical to manage and utilize the significant amount of data collected from the start and throughout the development and manufacturing cycle. Chemical engineers have routinely used computational tools for modeling, designing, optimizing, debottlenecking, and troubleshooting chemical processes. Herein, we present different applications of machine learning and multivariate statistics using industrial datasets. This dissertation also includes a deployed industrial solution to mitigate foaming in commercial fermentation reactors as a proof-of-concept (PoC). Our antifoam profiles are able to decrease volume loss by over 53% for a specific fermenter. Throughout this dissertation, we demonstrate applications of several techniques, such as ensemble methods, automated machine learning, exploratory time series analysis, and deep learning, for solving industrial problems. Our aim is to bridge the gap from industrial data acquisition to finding meaningful insights for process optimization.
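To illustrate the foaming-onset classification described above, here is a minimal XGBoost sketch on a synthetic stand-in dataset; the features, class imbalance, and hyperparameters are illustrative assumptions, not the dissertation's configuration.

```python
# Illustrative foaming-onset classifier on synthetic stand-in data.
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# stand-in for sensor features (exhaust differential pressure, agitation, ...)
X, y = make_classification(n_samples=2000, n_features=12,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = xgb.XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.05,
                        scale_pos_weight=9.0)  # compensate for rare foaming events
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```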
19

Design, implementation and application of computational intelligence methods for the prediction of pathogenic single nucleotide polymorphisms

Ραπακούλια, Τρισεύγενη 11 October 2013 (has links)
Single Nucleotide Polymorphisms (SNPs) are the most common form of genetic variation in humans. The number of SNPs that have been found in the human genome and affect protein functionality is constantly increasing, but matching SNPs to diseases using experimental techniques is prohibitively expensive in terms of time and cost. For this reason, several computational methods have been developed that classify polymorphisms as pathogenic or non-pathogenic. Most of them use classifiers which take as input a set of structural, functional, sequence-based and evolutionary features and predict whether a single nucleotide polymorphism is pathogenic or neutral. For training, these classifiers use two sets of SNPs: the first consists of SNPs that have been experimentally proven pathogenic, whereas the second consists of SNPs that have been experimentally characterized as benign. These methods differ in the classification techniques they deploy and in the features they use as inputs. However, their main weakness is that they determine their input feature sets empirically; different methods suggest different feature sets without adequately documenting the causes of this differentiation. In addition, the existing methodologies do not efficiently tackle the class-imbalance problem between positive and negative training sets or the problem of missing values in the datasets. In this thesis a new hybrid computational intelligence methodology is proposed that overcomes many of these problems, achieves high classification performance, and systematizes the selection of relevant features. In the first phase of this study, polymorphisms were gathered from the available public databases and used for training and testing the machine learning models. Specifically, the positive and negative training and test sets, consisting of single nucleotide polymorphisms that either lead to pathogenesis or are neutral, were collected and filtered. For each polymorphism in the two sets, a wide range of structural, functional, sequence-based and evolutionary features was calculated using existing tools; for features with no available tool, suitable code was developed to compute them.
In the second step a new embedded hybrid classification method called EnsembleGASVR is designed and implemented. The method is an ensemble methodology based on the hybrid combination of Genetic Algorithms and nu-Support Vector Regression (nu-SVR) models: an adaptive genetic algorithm determines the optimal subset of features and the optimal values of the classifier parameters. We propose the nu-SVR classifier since it exhibits high performance and good generalization ability, does not get trapped in local optima, and achieves a balance between the accuracy and the complexity of the model. To overcome the problems of missing values and class imbalance, we extended the algorithm to function as a collective ensemble technique combining eight individual classification models. Overall, the method achieves 87.45% accuracy, 71.78% sensitivity and 93.16% specificity. These preliminary results are very promising and show that the EnsembleGASVR methodology significantly outperforms other well-known classification methods for pathogenic mutations.
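As a rough illustration of one EnsembleGASVR member, the sketch below evolves a binary feature mask with a plain genetic algorithm, scoring each mask by cross-validated nu-SVR performance. The population size, single-point crossover, and mutation rate are placeholders; the thesis uses an adaptive GA and combines eight such models.

```python
# Simplified GA-driven feature selection for a nu-SVR model (illustrative).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import NuSVR

def ga_feature_selection(X, y, pop=20, gens=10, seed=0):
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(pop, X.shape[1])).astype(bool)

    def fitness(mask):
        if not mask.any():
            return -np.inf
        return cross_val_score(NuSVR(nu=0.5), X[:, mask], y, cv=3).mean()

    for _ in range(gens):
        scores = np.array([fitness(m) for m in masks])
        parents = masks[np.argsort(scores)[-pop // 2:]]           # selection
        cut = X.shape[1] // 2
        children = np.concatenate([parents[:, :cut],
                                   parents[::-1, cut:]], axis=1)  # crossover
        children ^= rng.random(children.shape) < 0.05             # mutation
        masks = np.vstack([parents, children])
    return masks[np.argmax([fitness(m) for m in masks])]          # best mask
```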
20

An approach for online learning in the presence of concept changes

Jaber, Ghazal 18 October 2013 (has links)
Learning from data streams is emerging as an important application area. When the environment changes, it is necessary to rely on online learning with the capability to adapt to changing conditions, a.k.a. concept drifts. Adapting to concept drifts entails forgetting some or all of the previously acquired knowledge when the concept changes, while accumulating knowledge about the supposedly stationary underlying concept. This tradeoff is called the stability-plasticity dilemma. Ensemble methods have been among the most successful approaches to it. However, the management of the ensemble, which ultimately controls how past data is forgotten, has not been thoroughly investigated so far. Our work shows the importance of the forgetting strategy by comparing several approaches. The results thus obtained lead us to propose a new ensemble method with an enhanced forgetting strategy designed to adapt to concept drifts. Experimental comparisons show that our method compares favorably with well-known state-of-the-art systems. The majority of previous works focused only on means to detect changes and to adapt to them. In our work, we go one step further by introducing a meta-learning mechanism that is able to detect relevant states of the environment, to recognize recurring contexts and to anticipate likely concept changes.
Hence, the method we suggest deals both with the challenge of optimizing the stability-plasticity dilemma and with the anticipation and recognition of incoming concepts. This is accomplished through an ensemble method that controls an ensemble of incremental learners. The management of the ensemble of learners makes it possible to adapt naturally to the dynamics of the concept changes with very few parameters to set, while a learning mechanism monitoring the changes in the ensemble provides means for the anticipation of, and quick adaptation to, the underlying modification of the context.
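The sketch below illustrates the general shape of an ensemble with an explicit forgetting strategy: members are re-weighted by accuracy on the most recent labeled batch and the worst one is replaced by a fresh learner. The replacement policy, weighting, and binary-label voting are illustrative assumptions, not the thesis's exact strategy.

```python
# Schematic drift-adaptive ensemble with a simple forgetting policy.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class ForgettingEnsemble:
    def __init__(self, size=10):
        self.members, self.weights, self.size = [], [], size

    def update(self, X_recent, y_recent):
        # plasticity: re-weight every member by accuracy on the newest data
        self.weights = [m.score(X_recent, y_recent) for m in self.members]
        if len(self.members) >= self.size:
            worst = int(np.argmin(self.weights))
            del self.members[worst], self.weights[worst]  # forget the worst member
        self.members.append(DecisionTreeClassifier().fit(X_recent, y_recent))
        self.weights.append(1.0)

    def predict(self, X):
        # weighted vote, assuming binary 0/1 labels
        preds = np.array([m.predict(X) for m in self.members])
        w = np.array(self.weights)[:, None]
        return ((preds * w).sum(axis=0) / w.sum() > 0.5).astype(int)
```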
