  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Predicting Customer Satisfaction in the Context of Last-Mile Delivery using Supervised and Automatic Machine Learning

Höggren, Carl January 2022 (has links)
The prevalence of online shopping has risen steadily in recent years. In response, last-mile delivery services have emerged that bring goods to customers within a shorter timeframe than traditional logistics providers. However, shorter lead times bring greater exposure to risks that directly influence customer satisfaction. More specifically, this report investigates the extent to which supervised and automatic machine learning can be leveraged to extract the features with the highest explanatory power for customer ratings. The implementation suggests that a Random Forest classifier outperforms both a Multi-Layer Perceptron and a Support Vector Machine in predicting customer ratings on a highly imbalanced version of the dataset, while AutoML outperforms the other models when the dataset is subject to undersampling. Using Permutation Feature Importance and Shapley Additive Explanations, it was further concluded that whether the delivery is on time, whether it is executed within the stated time window, and whether it is executed during the morning, afternoon, or evening are paramount drivers of customer ratings.
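Permutation feature importance, which the abstract above uses to rank drivers of customer ratings, can be sketched in a few lines of plain Python: shuffle one feature column and measure the resulting drop in accuracy. The toy model, feature meanings, and data below are illustrative assumptions, not the thesis's actual pipeline.

```python
import random

def permutation_importance(model, X, y, feature_idx, n_repeats=10, seed=0):
    """Mean accuracy drop when one feature column is shuffled."""
    rng = random.Random(seed)
    base = sum(model(x) == t for x, t in zip(X, y)) / len(y)
    drops = []
    for _ in range(n_repeats):
        col = [x[feature_idx] for x in X]
        rng.shuffle(col)
        Xp = [x[:feature_idx] + [v] + x[feature_idx + 1:] for x, v in zip(X, col)]
        acc = sum(model(x) == t for x, t in zip(Xp, y)) / len(y)
        drops.append(base - acc)
    return sum(drops) / n_repeats

# Toy "model": predicts a high rating iff the delivery was on time (feature 0).
model = lambda x: 1 if x[0] == 1 else 0
X = [[1, 0], [1, 1], [0, 0], [0, 1]] * 25   # feature 0: on_time, feature 1: noise
y = [model(x) for x in X]                    # labels driven by feature 0 only
print(permutation_importance(model, X, y, 0))  # large drop: on_time matters
print(permutation_importance(model, X, y, 1))  # 0.0: noise feature is ignored
```

Shuffling the informative column destroys most of the model's accuracy, while shuffling the noise column changes nothing; the same logic underlies the feature rankings reported above.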

Neonatal Sepsis Detection Using Decision Tree Ensemble Methods: Random Forest and XGBoost

Al-Bardaji, Marwan, Danho, Nahir January 2022 (has links)
Neonatal sepsis is a potentially fatal medical condition caused by infection and is attributed to about 200,000 annual deaths globally. With healthcare systems facing constant challenges, there is potential for introducing machine learning models as a diagnostic tool that can be automated within existing workflows without entailing more work for healthcare personnel. The Herlenius Research Team at Karolinska Institutet has collected neonatal sepsis data that has been used to develop many machine learning models across several papers, but none have studied decision tree ensemble methods. In this paper, random forest and XGBoost models are developed and evaluated in order to assess their feasibility for clinical practice. The data contained 24 features of vital parameters that are easily collected through a patient monitoring system. The validation and evaluation procedure needed special consideration because the data were grouped at the patient level and imbalanced. The proposed methods have the potential to be generalized to other similar applications. Finally, using the receiver operating characteristic area under the curve (ROC AUC), both models achieved around ROC AUC = 0.84. Such results suggest that the random forest and XGBoost models are potentially feasible for clinical practice. Another insight was that both seemed to perform better as simpler models, suggesting that future work could produce a more explainable model. / Bachelor's thesis in electrical engineering 2022, KTH, Stockholm
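The patient-level grouping and the ROC AUC evaluation mentioned above can both be sketched compactly: samples from one patient must never appear in both the train and test sets, and ROC AUC can be computed from ranks. The patient ids and scores below are invented for illustration; the sketch assumes no tied scores.

```python
def group_split(groups, test_groups):
    """Indices split so that no group (e.g. patient) appears in both sets."""
    train = [i for i, g in enumerate(groups) if g not in test_groups]
    test = [i for i, g in enumerate(groups) if g in test_groups]
    return train, test

def roc_auc(y_true, scores):
    """Rank-based ROC AUC: probability a positive outranks a negative.
    Assumes no tied scores (sufficient for a sketch)."""
    pairs = sorted(zip(scores, y_true))
    pos = sum(y_true)
    neg = len(y_true) - pos
    rank_sum = sum(r for r, (_, t) in enumerate(pairs, 1) if t == 1)
    return (rank_sum - pos * (pos + 1) / 2) / (pos * neg)

# Each entry is one monitoring window, tagged with its (hypothetical) patient id.
groups = ["p1", "p1", "p2", "p3", "p3", "p4"]
train_idx, test_idx = group_split(groups, {"p2", "p4"})
print(train_idx, test_idx)  # [0, 1, 3, 4] [2, 5] -- no patient on both sides

print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

A plain random split would leak correlated windows from the same infant across the boundary and inflate the measured AUC, which is why the grouped split matters here.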

Neonatal Sepsis Detection With Random Forest Classification for Heavily Imbalanced Data

Osman Abubaker, Ayman January 2022 (has links)
Neonatal sepsis is associated with most cases of mortality in the neonatal intensive care unit. Major challenges in detecting sepsis using suitable biomarkers have led people to look for alternative approaches in the form of machine learning techniques. In this project, Random Forest classification was performed on a sepsis data set provided by Karolinska Hospital. We particularly focused on tackling class imbalance in the data using sampling and cost-sensitive techniques. We compare the classification performance of Random Forests in six different setups: four using oversampling and undersampling techniques, one using cost-sensitive learning, and one basic Random Forest. The performance with the oversampling techniques was better and could identify more sepsis patients than the other setups. The overall performance was also good, making the methods potentially useful in practice. / Bachelor's thesis in electrical engineering 2022, KTH, Stockholm
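The simplest of the oversampling techniques compared above, random oversampling, just duplicates minority-class samples until the classes are balanced. The sketch below is a minimal illustration with made-up data, not the project's actual sampling code.

```python
import random

def random_oversample(X, y, seed=0):
    """Duplicate minority-class samples until both classes are equal-sized."""
    rng = random.Random(seed)
    pos = [(x, t) for x, t in zip(X, y) if t == 1]
    neg = [(x, t) for x, t in zip(X, y) if t == 0]
    small, large = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = [rng.choice(small) for _ in range(len(large) - len(small))]
    data = pos + neg + extra
    rng.shuffle(data)
    return [x for x, _ in data], [t for _, t in data]

X = [[i] for i in range(10)]
y = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]   # 1 sepsis case among 10 samples
Xb, yb = random_oversample(X, y)
print(sum(yb), len(yb) - sum(yb))     # 9 9 -- balanced
```

The cost-sensitive alternative leaves the data untouched and instead weights minority-class errors more heavily in the loss, which avoids the duplicated-sample overfitting risk that oversampling carries.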

Predictive Maintenance of Construction Equipment using Log Data: A Data-centric Approach

Kotriwala, Bazil Muzaffar January 2021 (has links)
Construction equipment manufacturers want to reduce the downtime of their equipment by moving from typical reactive maintenance to a predictive maintenance approach. They would like to define a method to predict failures of construction equipment ahead of time by leveraging the real-world data logged by their vehicles. This data is logged as general event data and specific sensor data belonging to different components of the vehicle. The scope of this study is articulated hauler vehicles, with the engine as the specific component under observation. Extensive time and resources are spent preparing both real-world data sources and devising methods so that both are ready for predictive maintenance and can be merged together. The prepared data is used to build remaining-useful-life machine learning models that classify whether there will be a failure in the next x days. These models are built using data from two different approaches: a lead data shift approach and a resampling approach. Three experiments are carried out for each approach using three combinations of data: event log only, engine sensor log only, and event and sensor logs combined. All experiments use an increasing look-ahead window size for how far into the future the failure should be predicted. The results are evaluated to determine the best approach, data combination, and window size for foreseeing engine failures. Model performance is primarily distinguished by the F-score and the area under the precision-recall curve.
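The look-ahead labelling described above, marking each logged day as positive if a failure occurs within the next x days, can be sketched as follows. The dates and failure day are invented for illustration.

```python
from datetime import date, timedelta

def label_lookahead(log_days, failure_days, window):
    """Label each logged day 1 if a failure occurs within the next `window` days."""
    failures = set(failure_days)
    labels = []
    for d in log_days:
        horizon = {d + timedelta(days=k) for k in range(1, window + 1)}
        labels.append(1 if horizon & failures else 0)
    return labels

days = [date(2021, 5, d) for d in range(1, 8)]    # a week of vehicle logs
failures = [date(2021, 5, 6)]                     # one recorded engine failure
print(label_lookahead(days, failures, 3))         # [0, 0, 1, 1, 1, 0, 0]
```

Growing the window converts more pre-failure days into positives, which trades earlier warnings against more false alarms; that is the trade-off the increasing window-size experiments explore.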

Cost-Sensitive Learning-based Methods for Imbalanced Classification Problems with Applications

Razzaghi, Talayeh 01 January 2014 (has links)
Analysis and predictive modeling of massive datasets is a significant problem that arises in many practical applications. The task of predictive modeling becomes even more challenging when data are imperfect or uncertain. Real data are frequently affected by outliers, uncertain labels, and uneven class distributions (imbalanced data). Such uncertainties create bias and make predictive modeling even more difficult. In the present work, we introduce a cost-sensitive learning (CSL) method to deal with the classification of imperfect data. Most traditional approaches to classification demonstrate poor performance in an environment with imperfect data. We propose the use of CSL with the Support Vector Machine, a well-known data mining algorithm. The results reveal that the proposed algorithm produces more accurate classifiers and is more robust with respect to imperfect data. Furthermore, we explore the best performance measures for tackling imperfect data, along with addressing real problems in quality control and business analytics.
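The core of cost-sensitive SVM learning is a class-weighted hinge loss: errors on the rare class are penalised more than errors on the common class. The sketch below shows only that loss term, with illustrative cost values, not the dissertation's full formulation.

```python
def weighted_hinge_loss(y_true, margins, cost_pos=5.0, cost_neg=1.0):
    """Hinge loss where errors on the rare positive class cost more.

    y_true in {-1, +1}; margins are y * f(x) from a linear SVM.
    """
    total = 0.0
    for t, m in zip(y_true, margins):
        total += (cost_pos if t == 1 else cost_neg) * max(0.0, 1.0 - m)
    return total / len(y_true)

# A misclassified positive (margin -1) is penalised 5x a misclassified negative.
print(weighted_hinge_loss([1, -1], [-1.0, -1.0]))  # (5*2 + 1*2)/2 = 6.0
```

Raising `cost_pos` shifts the learned decision boundary away from the minority class, so fewer minority samples are sacrificed to minimise the overall loss.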

Generative Approach For Multivariate Signals

Sawant, Vinay, Bhende, Renu January 2024 (has links)
In this thesis, we explored the generative adversarial network uTSGAN to generate patterns from a multivariate CAN bus time series dataset. Since the given data is unlabelled, unprocessed, and highly imbalanced, with a large number of missing values, we had to define and discard a few timestamps and limit the focus of the study to a reduced subset of patterns with a 10-second window size, categorised and clustered into majority and minority classes. To generate such an imbalanced set, we used the image-based time series GAN uTSGAN, which transforms a time sequence into a spectrogram image and back to a time sequence within the same GAN framework. For comparison, we also prepared a resampled (balanced) dataset from the imbalanced set for use in alternative experiments; this comparison evaluates the conventional resampling approach against the baseline as well as our novel implementations. We propose two new methods using "cluster density based" and "sample loss based" techniques. Throughout the experimentation, the "cluster density based" GANs consistently achieved better results on several common and uncommon evaluations for multivariate and highly imbalanced sample sets. Among the regular evaluation methods, classification metrics such as balanced accuracy and precision provide the best understanding of the results. The TRTS balanced accuracy and precision of the "cluster density based" GAN exceed 82% and 90%, improvements of 20-30% and 14-18% respectively over the baseline; its TSTR balanced accuracy increased by 10.6% over the baseline, and it shows slightly better precision than the baseline when compared on generated results from univariate experiments. The alternative "resampling based" implementations show values similar to the baseline in TRTS and TSTR classifications.
More distinct results are seen using a quantitative metric called the Earth Mover's Distance (EMD). We used this distance measure to calculate the overall mean EMD and cluster-wise mean EMD between real samples and fake (i.e. generated) samples. In these evaluations, the "cluster density based" experiments showed significantly better results not only for majority but also for minority clusters, compared to the baseline and "resampling based" experiments. Finally, we open a discussion on how our findings can be utilized in the MAR problem, as well as how the results can be improved by taking some precautionary measures.
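For one-dimensional samples of equal size, the Earth Mover's Distance used above reduces to the mean absolute difference between sorted order statistics. This is a minimal sketch of that special case with made-up data, not the thesis's multivariate, cluster-wise computation.

```python
def emd_1d(a, b):
    """Earth Mover's Distance between two equal-length 1-D samples:
    for sorted samples, the mean absolute difference of order statistics."""
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)

real = [0.0, 1.0, 2.0, 3.0]   # e.g. values from real CAN bus windows
fake = [0.5, 1.5, 2.5, 3.5]   # e.g. values from generated windows
print(emd_1d(real, fake))     # 0.5 -- every unit of mass moves 0.5
```

A low mean EMD between real and generated samples indicates the GAN reproduces the data distribution well; computing it per cluster, as above, exposes whether minority patterns are also covered.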

Apprentissage supervisé de données déséquilibrées par forêt aléatoire / Supervised learning of imbalanced datasets using random forest

Thomas, Julien 12 February 2009 (has links)
The problem of imbalanced datasets in supervised learning has emerged relatively recently, as data mining has become a technology widely used in industry. Assisted medical diagnosis and the detection of fraud, abnormal phenomena, or specific elements in satellite imagery are examples of industrial applications based on supervised learning from imbalanced datasets. The goal of our work is to adapt various elements of the supervised learning process to this problem. We also address the specific performance requirements often associated with imbalanced datasets, such as a high recall rate for the minority class. This need is reflected in our main application, the development of software to help radiologists detect breast cancer. For this, we propose new methods amending three different stages of a learning process. First, in the sampling stage, we propose, in the case of bagging, to replace classic bootstrap sampling with guided sampling; our techniques FUNSS and LARSS use neighbourhood properties for the selection of objects. Second, for the representation space, our contribution is a method of variable construction adapted to imbalanced datasets; this method, the FuFeFa algorithm, is based on the discovery of predictive association rules. Finally, at the stage of aggregating the base classifiers of a bagging ensemble, we propose to optimize the majority vote by weighting it. For this, we introduce a new quantitative measure of model assessment, PRAGMA, which takes into account user-specific needs regarding the recall and precision rates of each class.
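The weighted majority vote proposed for the aggregation stage can be sketched in a few lines: each base classifier casts a ballot scaled by its weight, rather than counting each vote equally. The weights and votes below are invented for illustration; the thesis derives its weights from the PRAGMA measure.

```python
def weighted_vote(predictions, weights):
    """Aggregate base-classifier votes, weighting each classifier's ballot."""
    tally = {}
    for pred, w in zip(predictions, weights):
        tally[pred] = tally.get(pred, 0.0) + w
    return max(tally, key=tally.get)

# Three weak voters say 0, but a classifier trusted on the minority class says 1.
print(weighted_vote([0, 0, 0, 1], [0.2, 0.2, 0.2, 0.9]))  # 1
```

With equal weights, this reduces to the plain majority vote of standard bagging; weighting lets classifiers that are reliable on the minority class override a numerical majority.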

A methodology for improving computed individual regressions predictions. / Uma metodologia para melhorar predições individuais de regressões.

Matsumoto, Élia Yathie 23 October 2015 (has links)
This research proposes a methodology to improve the individual prediction values computed by an existing regression model without changing either its parameters or its architecture. In other words, we are interested in achieving more accurate results by adjusting the calculated regression prediction values, without modifying or rebuilding the original regression model. Our proposition is to adjust the regression prediction values using individual reliability estimates that indicate whether a single regression prediction is likely to produce an error the user of the regression considers critical. The proposed method was tested in three sets of experiments using three different types of data. The first set worked with synthetically produced data, the second with cross-sectional data from the public UCI Machine Learning Repository, and the third with time series data from ISO-NE (the Independent System Operator in New England). The experiments with synthetic data were performed to verify how the method behaves in controlled situations; in this case, the method produced its best improvements on cleaner, artificially produced datasets, with progressive worsening as random elements were added. The experiments with real data from UCI and ISO-NE investigated the applicability of the methodology in the real world. The proposed method was able to improve regression prediction values in about 95% of the experiments with real data.
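One minimal way to read "adjust predictions using individual reliability estimates" is a gate: apply a correction only to predictions flagged as likely to produce a critical error. The threshold, reliability scores, and corrections below are invented for illustration and stand in for whatever estimators the methodology actually uses.

```python
def adjust_predictions(preds, reliabilities, corrections, threshold=0.5):
    """Apply a per-prediction correction only where reliability is low."""
    out = []
    for p, r, c in zip(preds, reliabilities, corrections):
        out.append(p + c if r < threshold else p)
    return out

preds = [10.0, 20.0, 30.0]
reliab = [0.9, 0.2, 0.8]    # second prediction deemed error-prone
corr = [0.0, -4.0, 0.0]     # estimated bias for that prediction
print(adjust_predictions(preds, reliab, corr))  # [10.0, 16.0, 30.0]
```

The key property, preserved in this sketch, is that the original regression model is never retrained: only its outputs are post-processed.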

Μηχανική μάθηση σε ανομοιογενή δεδομένα / Machine learning in imbalanced data sets

Λυπιτάκη, Αναστασία Δήμητρα Δανάη 07 July 2015 (has links)
Ideally, machine learning (ML) algorithms should be able to generalize to every class with the same accuracy: in a two-class problem of positive and negative cases, the algorithm should predict positive and negative examples with equal accuracy. In many applications, however, ML algorithms must learn from data sets that contain far more examples of one class than of the other. In general, inductive algorithms are designed to minimize overall error; as a consequence, classes containing few cases can be largely ignored, because the cost of misclassifying the over-represented class outweighs the cost of misclassifying the smaller class. The problem of imbalanced data sets occurs in many real applications, such as medical diagnosis, robotics, industrial production processes, communication network error detection, automated testing of electronic equipment, and other related areas. This dissertation, entitled 'Machine Learning with Imbalanced Data', addresses the problem of efficiently using ML algorithms on imbalanced data sets. It includes a general description of basic ML algorithms and of methods for dealing with imbalanced data, and presents a number of algorithmic techniques for handling imbalanced data sets, such as AdaCost, Cost-Sensitive Boosting, MetaCost, and other algorithms.
The evaluation metrics of ML methods for imbalanced datasets are also presented, including ROC (receiver operating characteristic) curves, PR (precision-recall) curves, and cost curves. A new hybrid ML algorithm combining the OverBagging and Rotation Forest techniques is introduced, and the proposed algorithm is compared with related algorithms on a set of imbalanced data using the WEKA environment. Experimental results demonstrate the superior performance of the proposed algorithm. Finally, the conclusions of this research work are presented and several future research directions are given.
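Balanced accuracy, one of the evaluation metrics discussed above, averages the recall of each class and therefore exposes classifiers that look accurate only because they favour the majority class. The sketch and data below are illustrative, not taken from the dissertation's experiments.

```python
def confusion(y_true, y_pred):
    """Binary confusion counts: (tp, fp, fn, tn)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall: robust to class imbalance."""
    tp, fp, fn, tn = confusion(y_true, y_pred)
    return 0.5 * (tp / (tp + fn) + tn / (tn + fp))

# 90% plain accuracy, but the lone positive is missed entirely:
y_true = [1] + [0] * 9
y_pred = [0] * 10
print(balanced_accuracy(y_true, y_pred))  # 0.5 -- no better than chance
```

This is why imbalanced-learning work reports balanced accuracy, ROC, or PR curves rather than plain accuracy: an always-negative classifier scores 90% accuracy here yet only 0.5 balanced accuracy.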
