311

Cartographie de l'occupation des sols à partir de séries temporelles d'images satellitaires à hautes résolutions : identification et traitement des données mal étiquetées / Land cover mapping by using satellite image time series at high resolutions : identification and processing of mislabeled data

Pelletier, Charlotte 11 December 2017 (has links)
In recent years, the study of continental surfaces has become a major issue worldwide for the management and monitoring of territories, in particular regarding the consumption of agricultural land and urban sprawl. In this context, land cover maps characterizing the biophysical cover of land surfaces play an essential role in the mapping of continental surfaces. Producing these maps over large areas relies on satellite data, which make it possible to image continental surfaces frequently and at low cost. The launch of new satellite constellations (Landsat-8 and Sentinel-2) has in recent years enabled the acquisition of time series at high resolutions. These time series are used in supervised classification processes in order to produce land cover maps. The arrival of these new data opens new perspectives, but raises questions about the choice of classification algorithms and of the data to feed into the classification system. Besides the satellite data, supervised classification algorithms use training samples to define their decision rule. In our case, these samples are labeled, i.e. the class associated with a land cover type is known. Thus, the quality of the land cover map is directly linked to the quality of the training labels. However, classification over large areas requires a large number of samples to characterize the diversity of landscapes. Since collecting reference data is a long and tedious task, training samples are often extracted from older databases in order to obtain a substantial number of samples over the whole surface to be mapped. Using these older data to classify more recent satellite images, however, leads to the presence of many mislabeled samples in the training set. Unfortunately, using these mislabeled samples in the classification process can induce classification errors, and therefore a deterioration in the quality of the produced map. The general objective of this thesis is to improve the classification of the new high-resolution satellite image time series. The first objective is to determine the stability and robustness of classification methods over large areas. More specifically, the work analyzes classification algorithms and their sensitivity to their parameters and to the data fed into the classification system. In addition, the robustness of these algorithms to the presence of imperfect data is studied. The second objective addresses the errors present in the training data, known as mislabeled data. First, methods for detecting mislabeled data are proposed and studied. Second, a methodological framework is proposed to take mislabeled data into account in the classification process. The goal is to reduce the influence of mislabeled data on the classification performance, and thus to improve the produced land cover map.
/ Land surface monitoring is a key challenge for diverse applications such as environment, forestry, hydrology and geology. Such monitoring is particularly helpful for the management of territories and the prediction of climate trends. For this purpose, mapping approaches that employ satellite-based Earth Observations at different spatial and temporal scales are used to obtain the land surface characteristics. More precisely, supervised classification algorithms that exploit satellite data present many advantages compared to other mapping methods. In addition, the recent launches of new satellite constellations - Landsat-8 and Sentinel-2 - enable the acquisition of satellite image time series at high spatial and spectral resolutions, which are of great interest for describing vegetation land cover. These satellite data open new perspectives, but also raise questions about the choice of classification algorithms and the choice of input data. In addition, training classification algorithms over large areas requires a substantial number of instances per land cover class to describe landscape variability. Accordingly, training data can be extracted from existing maps or specific existing databases, such as farmers' crop parcel declarations or government databases. When using these databases, the main drawbacks are a lack of accuracy and update problems due to long production times. Unfortunately, the use of these imperfect training data leads to the presence of mislabeled training instances that may impact the classification performance, and so the quality of the produced land cover map. Taking into account the above challenges, this Ph.D. work aims at improving the classification of new satellite image time series at high resolutions. The work has been divided into two main parts. The first Ph.D. goal consists in studying different classification systems by evaluating two classification algorithms with several input datasets. In addition, the stability and the robustness of the classification methods are discussed. The second goal deals with the errors contained in the training data. Firstly, methods for the detection of mislabeled data are proposed and analyzed. Secondly, a filtering method is proposed to take the mislabeled data into account in the classification framework. The objective is to reduce the influence of mislabeled data on the classification performance, and thus to improve the produced land cover map.
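To make the mislabeled-data idea concrete, a minimal sketch follows: it flags training samples whose out-of-fold prediction disagrees with their label and retrains on the rest. This is one standard filtering scheme shown for illustration, not necessarily the method developed in the thesis; the model choice and settings are assumptions.

```python
# Sketch: filter suspected mislabeled training samples using
# out-of-fold random forest predictions (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def filter_mislabeled(X, y, n_estimators=100, cv=5, seed=0):
    """Return a boolean mask of training samples kept after filtering."""
    rf = RandomForestClassifier(n_estimators=n_estimators, random_state=seed)
    # Out-of-fold predictions: each sample is predicted by a model
    # that never saw it during training.
    y_oof = cross_val_predict(rf, X, y, cv=cv)
    return y_oof == y  # keep samples whose label the ensemble confirms

# Usage (X: per-pixel time-series features, y: possibly noisy labels):
# keep = filter_mislabeled(X, y)
# clean_rf = RandomForestClassifier(n_estimators=100).fit(X[keep], y[keep])
```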
312

A Comparison of Machine Learning Techniques for Facial Expression Recognition

Deaney, Mogammat Waleed January 2018 (has links)
Magister Scientiae - MSc (Computer Science) / A machine translation system that can convert South African Sign Language (SASL) video to audio or text and vice versa would be beneficial to people who use SASL to communicate. Five fundamental parameters are associated with sign language gestures, these are: hand location; hand orientation; hand shape; hand movement and facial expressions. The aim of this research is to recognise facial expressions and to compare both feature descriptors and machine learning techniques. This research used the Design Science Research (DSR) methodology. A DSR artefact was built which consisted of two phases. The first phase compared local binary patterns (LBP), compound local binary patterns (CLBP) and histogram of oriented gradients (HOG) using support vector machines (SVM). The second phase compared the SVM to artificial neural networks (ANN) and random forests (RF) using the most promising feature descriptor from the first phase, HOG. The performance was evaluated in terms of accuracy, robustness to classes, robustness to subjects and ability to generalise on both the Binghamton University 3D facial expression (BU-3DFE) and Cohn-Kanade (CK) datasets. The first phase of the evaluation showed HOG to be the best feature descriptor, followed by CLBP and LBP. The second showed ANN to be the best choice of machine learning technique, closely followed by the SVM and RF.
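By way of illustration, a minimal sketch of a HOG-plus-SVM pipeline of the kind compared in the first phase is given below; the HOG parameters and the RBF kernel are assumptions, not the settings tuned in this research.

```python
# Sketch: HOG features + SVM for facial expression recognition
# (illustrative pipeline; parameters are assumed, not tuned).
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def hog_features(images):
    """images: iterable of same-sized 2-D grayscale face crops."""
    return np.array([
        hog(img, orientations=9, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2), block_norm='L2-Hys')
        for img in images
    ])

# X_train/X_test: aligned face images; y_*: expression labels.
# clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0))
# clf.fit(hog_features(X_train), y_train)
# accuracy = clf.score(hog_features(X_test), y_test)
```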
313

A walk through randomness for face analysis in unconstrained environments / Etude des méthodes aléatoires pour l'analyse de visage en environnement non contraint

Dapogny, Arnaud 01 December 2016 (has links)
Automatic facial expression analysis is a key step for the development of intelligent interfaces and for behavior analysis. However, it is made difficult by a large number of factors, which can be morphological, related to head pose, or due to the presence of occlusions. We propose adaptations of Random Forests to address these issues: – The development of Pairwise Conditional Random Forests, which consist in learning models from pairs of expressive images. The trees are furthermore conditioned on the expression of the first image in order to reduce the variability of transitions. In addition, the trees can be conditioned on a head pose estimate to allow recognition from any viewpoint. – The use of auto-associative neural networks to locally model the appearance of the face. These networks provide a confidence measure that can be used to weight Random Forests defined on local subspaces of the face. In doing so, it is possible to provide an expression prediction robust to partial occlusions of the face. – Improvements to the recently proposed Neural Decision Forests algorithm, consisting of a simplified training procedure and a greedy evaluation allowing faster inference, with applications to online learning of deep representations for expression recognition and facial landmark alignment. / Automatic face analysis is a key to the development of intelligent human-computer interaction systems and behavior understanding. However, there exist a number of factors that make face analysis a difficult problem. These include morphological differences between persons, head pose variations, as well as the possibility of partial occlusions. In this PhD, we propose a number of adaptations of the so-called Random Forest algorithm to specifically address those problems. Mainly, those improvements consist in: – The development of a Pairwise Conditional Random Forest framework, which consists in training Random Forests upon pairs of expressive images. Pairwise trees are conditioned on the expression label of the first frame of a pair to reduce the ongoing expression transition variability. Additionally, trees can be conditioned upon a head pose estimate to perform facial expression recognition from an arbitrary viewpoint. – The design of a hierarchical autoencoder network to model the local face texture patterns. The reconstruction error of this network provides a confidence measurement that can be used to weight randomized decision trees trained on spatially-defined local subspaces of the face. Thus, we can provide an expression prediction that is robust to partial occlusions. – Improvements over the very recent Neural Decision Forests framework, including both a simplified training procedure and a new greedy evaluation procedure that dramatically improves the evaluation runtime, with applications to online learning and deep convolutional neural network-based features for facial expression recognition, as well as feature point alignment.
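To make the occlusion-handling idea concrete, here is a minimal sketch of confidence-weighted voting over local face regions; the scikit-learn-style predict/predict_proba interfaces and the exponential weighting are assumptions, not the thesis's exact formulation.

```python
# Sketch: weight local classifiers by an autoencoder's reconstruction
# confidence, so occluded regions count less (illustrative only).
import numpy as np

def confidence_weighted_vote(patches, autoencoders, classifiers, n_classes):
    """patches: list of 1-D feature vectors, one per face region.
    autoencoders[i].predict and classifiers[i].predict_proba are
    assumed to follow the scikit-learn interface."""
    total = np.zeros(n_classes)
    for x, ae, clf in zip(patches, autoencoders, classifiers):
        x = x.reshape(1, -1)
        recon_error = np.mean((ae.predict(x) - x) ** 2)
        weight = np.exp(-recon_error)      # poorly reconstructed
        total += weight * clf.predict_proba(x)[0]  # regions count less
    return np.argmax(total)               # predicted expression class
```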
314

5G Positioning using Machine Learning

Malmström, Magnus January 2018 (has links)
Positioning is recognized as an important feature of fifth generation (5G) cellular networks due to the massive number of commercial use cases that would benefit from access to position information. Radio-based positioning has always been a challenging task in urban canyons, where buildings block and reflect the radio signal, causing multipath propagation and non-line-of-sight (NLOS) signal conditions. One approach to handle NLOS is to use data-driven methods such as machine learning algorithms on beam-based data, where a training data set with positioned measurements is used to train a model that transforms measurements into position estimates.  The work is based on position and radio measurement data from a 5G testbed. The transmission point (TP) in the testbed has an antenna with beams in both horizontal and vertical layers. The measurements are the beam reference signal received power (BRSRP) from the beams and the direction of departure (DOD) from the set of beams with the highest received signal strength (RSS). For modelling the relation between measurements and positions, two non-linear models have been considered: neural networks and random forests. These non-linear models will be referred to as machine learning algorithms.  The machine learning algorithms are able to position the user equipment (UE) in NLOS regions with a horizontal positioning error of less than 10 meters in 80 percent of the test cases. The results also show that it is essential to combine information from beams in the different vertical antenna layers to be able to perform positioning with high accuracy under NLOS conditions. Further, the tests show that the data must be separated into line-of-sight (LOS) and NLOS data before training the machine learning algorithms to achieve good positioning performance under both LOS and NLOS conditions. Therefore, a generalized likelihood ratio test (GLRT) to classify data as originating from LOS or NLOS conditions has been developed. The probability of detection of the algorithms is about 90 percent when the probability of false alarm is only 5 percent.  To boost the position accuracy of the machine learning algorithms, a Kalman filter has been developed with the output from the machine learning algorithms as input. Results show that this can improve the position accuracy in NLOS scenarios significantly. / Radio-based positioning of user equipment is an important application in fifth-generation (5G) radio networks, on which much time and money is spent for development and improvement. One example application area is the positioning of emergency calls, where the user equipment should be positioned with an accuracy of around ten meters. Radio-based positioning has always been challenging in urban environments, where tall buildings obscure and reflect the signal between the user equipment and the base station. One idea for positioning in these challenging urban environments is to use data-driven models trained on positioned test data, so-called machine learning algorithms. In this work, two non-linear models, neural networks and random forests, have been implemented and evaluated for positioning of user equipment where the signal from the base station is obscured. The evaluation was made on data collected by Ericsson from a 5G prototype network located in Kista, Stockholm. The antenna of the base station used has 48 beams in five different vertical layers. The inputs and targets of the machine learning algorithms are the signal strength of each beam (BRSRP) and given GPS positions of the user equipment, respectively. The results show that with these machine learning algorithms the user equipment is positioned with an error of less than ten meters in 80 percent of the test cases. To achieve these results, it is important to detect whether the signal between the user equipment and the base station is obscured or not; to do this, a statistical test has been implemented, with a detection probability above 90 percent while the probability of false alarm is only a few percent. To reduce the positioning uncertainty, the output of the machine learning algorithms has been filtered with a Kalman filter, and the results show that the Kalman filter can improve the positioning precision markedly.
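As an illustration of the final filtering step, below is a minimal constant-velocity Kalman filter over 2-D position estimates; the noise covariances and sampling interval are assumptions, not the values used in the thesis.

```python
# Sketch: constant-velocity Kalman filter smoothing the 2-D position
# estimates produced by the machine learning models (illustrative).
import numpy as np

dt = 1.0                                   # time between estimates [s]
F = np.array([[1, 0, dt, 0],               # state transition for
              [0, 1, 0, dt],               # state [x, y, vx, vy]
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # we only observe position
              [0, 1, 0, 0]], dtype=float)
Q = 0.1 * np.eye(4)                        # process noise (assumed)
R = 25.0 * np.eye(2)                       # ML-estimate noise (assumed)

def kalman_smooth(positions):
    """positions: (N, 2) array of raw ML position estimates."""
    x = np.array([positions[0, 0], positions[0, 1], 0.0, 0.0])
    P = 100.0 * np.eye(4)
    out = []
    for z in positions:
        x = F @ x                          # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                # update with ML estimate z
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)
```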
315

Improving armed conflict prediction using machine learning : ViEWS+

Helle, Valeria, Negus, Andra-Stefania, Nyberg, Jakob January 2018 (has links)
Our project, ViEWS+, expands the software functionality of the Violence Early-Warning System (ViEWS). ViEWS aims to predict the probabilities of armed conflicts in the next 36 months using machine learning. Governments and policy-makers may use conflict predictions to decide where to deliver aid and resources, potentially saving lives. The predictions use conflict data gathered by ViEWS, which includes variables like past conflicts, child mortality and urban density. The large number of variables raises the need for a selection tool to remove those that are irrelevant for conflict prediction. Before our work, the stakeholders used their experience and some guesswork to pick the variables and the predictive function with its parameters. Our goals were to improve the efficiency, in terms of speed, and the correctness of the ViEWS predictions. Three steps were taken. Firstly, we made an automatic variable selection tool. This helps researchers use fewer, more relevant variables, saving time and resources. Secondly, we compared prediction functions and identified the best for the purpose of predicting conflict. Lastly, we tested how parameter values affect the performance of the chosen functions, so as to produce good predictions while also reducing the execution time. The new tools improved both the execution time and the predictive correctness of the system compared to the results obtained prior to our project. It is now nine times faster than before, and its correctness has improved by a factor of three. We believe our work leads to more accurate conflict predictions, and as ViEWS has strong connections to the European Union, we hope that decision makers can benefit from it when trying to prevent conflicts. / In this project, which we have named ViEWS+, we have improved various aspects of ViEWS (the Violence Early-Warning System), a system that uses machine learning to try to predict where in the world armed conflicts will arise. The goal of ViEWS is to predict the probability of conflicts as far as 36 months into the future, so that politicians and decision-makers can use this knowledge to prevent them. The input to the system is conflict data with a large number of features, such as past conflicts, child mortality and urbanization. These are of varying usefulness, which creates a need to filter out those that are not useful for predicting future conflicts. Before our project, the researchers using ViEWS selected features by hand, which becomes increasingly difficult as more are introduced. The research group also had no formal methodology for choosing parameter values for the machine learning functions they use; they chose parameters based on experience and intuition, which can lead to unnecessarily long execution times and possibly worse results depending on the function used. Our goals for the project were to improve the system's productivity, in terms of execution time and prediction reliability. To achieve this, we developed analysis tools to address the existing problems. We built a tool for selecting fewer, more useful features from the data set, so that features contributing no important information can be discarded, saving execution time. We also compared the performance of different machine learning functions to identify those best suited for conflict prediction. Finally, we implemented a tool for analysing how the results of the functions vary with the choice of parameters, so that parameter values can be chosen systematically to guarantee good results while keeping execution time down. Our results show that with our improvements the execution time was reduced by a factor of about nine and the predictive abilities improved by a factor of three. We hope that our work can lead to more reliable predictions, which in turn may lead to a more peaceful world.
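As a sketch of what such a selection tool can look like, the snippet below ranks variables by random forest feature importance and keeps those above a threshold; the model and threshold are assumptions, not the project's actual implementation.

```python
# Sketch: automatic variable selection via random forest feature
# importance (illustrative; threshold and model are assumed).
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

def select_variables(X, y, threshold='median'):
    """Keep variables whose importance exceeds the threshold;
    returns a boolean mask over the columns of X."""
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    selector = SelectFromModel(rf, threshold=threshold).fit(X, y)
    return selector.get_support()

# mask = select_variables(X, y)    # X: conflict features, y: outcome
# X_reduced = X[:, mask]           # fewer, more relevant variables
```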
316

Técnicas de machine learning aplicadas na recuperação de crédito do mercado brasileiro / Machine learning techniques applied to credit recovery in the Brazilian market

Forti, Melissa 08 August 2018 (has links)
The need to know the customer has always been a differentiator in the market, and in recent years we have seen exponential growth in the information and techniques that support evaluation at every stage of the credit cycle, from prospecting to debt recovery. In this context, companies are investing more and more in machine learning methods so that they can extract the maximum amount of information and thus run more assertive and profitable processes. However, these techniques are still met with some distrust in the financial environment. Given this context, the objective of this work was to apply the machine learning techniques Random Forest, Support Vector Machine and Gradient Boosting to a real debt-collection database in order to identify the clients most likely to repay their debts (Collection Score), and to compare the accuracy and interpretability of these models with the traditional logistic regression methodology. The main contribution of this work is the comparison of the techniques in a credit recovery setting, considering their main characteristics, advantages and disadvantages. / The need to know the customer has always been a differential for the market, and in recent years we have experienced exponential growth in the information and techniques that promote this evaluation for all phases of the credit cycle, from prospecting to debt recovery. In this context, companies are increasingly investing in machine learning methods so that they can extract the maximum information and thus have more assertive and profitable processes. However, these models are still viewed with some distrust in the financial environment. Given this need and uncertainty, the objective of this work was to apply the machine learning techniques Random Forest, Support Vector Machine and Gradient Boosting to a real collection database in order to identify the clients likely to repay (Collection Score) and to compare the accuracy and interpretation of these models with the classical logistic regression methodology. The main contribution of this work is the comparison of the techniques and whether they are suitable for this application, considering their main characteristics, pros and cons.
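A minimal sketch of such a benchmark is shown below, comparing the four model families by cross-validated AUC; the hyperparameters and the scoring choice are assumptions, not those of the dissertation.

```python
# Sketch: comparing collection-score models by cross-validated AUC
# (illustrative benchmark; data set and settings are assumed).
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

models = {
    'logistic_regression': LogisticRegression(max_iter=1000),
    'random_forest': RandomForestClassifier(n_estimators=300),
    'svm': SVC(probability=True),
    'gradient_boosting': GradientBoostingClassifier(),
}

# X: borrower/debt features; y: 1 if the debt was repaid, else 0.
# for name, model in models.items():
#     auc = cross_val_score(model, X, y, cv=5, scoring='roc_auc').mean()
#     print(f'{name}: AUC = {auc:.3f}')
```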
317

Household’s energy consumption and production forecasting: A Multi-step ahead forecast strategies comparison.

Martín-Roldán Villanueva, Gonzalo January 2017 (has links)
In a changing global energy market, the decarbonization of the economy and demand growth are pushing the search for new models away from the existing centralized, non-renewable-based grid. For this to happen, households have to take on a ‘prosumer’ role; to help them take optimal actions, a multi-step ahead forecast of their expected energy production and consumption is needed. In multi-step ahead forecasting there are different strategies to perform the forecast: the single-output strategies Recursive, Direct and DirRec, and the multi-output strategies MIMO and DIRMO. This thesis compares the performance of the different strategies in a ‘prosumer’ household, using Artificial Neural Networks, Random Forest and K-Nearest Neighbours Regression to forecast both solar energy production and grid input. The results of this thesis indicate that the proposed methodology performs better than state-of-the-art models on a more detailed household energy consumption dataset. They also indicate that the strategy and model of choice are problem dependent, and that a strategy selection step should be added to the forecasting methodology. Additionally, the performance of the Recursive strategy is always far from the best, while the DIRMO strategy performs similarly to the best; this makes the latter a suitable option for exploratory analysis.
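The two simplest single-output strategies can be sketched as follows; the regressor, window length and training scheme are assumptions, shown only to make the strategy distinction concrete.

```python
# Sketch: Recursive vs. Direct multi-step forecasting strategies
# (illustrative; model and window choices are assumed).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def recursive_forecast(model, history, horizon, window):
    """One model predicts t+1; its output is fed back for t+2, ..."""
    buf = list(history[-window:])
    preds = []
    for _ in range(horizon):
        y_hat = model.predict(np.array(buf[-window:]).reshape(1, -1))[0]
        preds.append(y_hat)
        buf.append(y_hat)                  # feed the prediction back in
    return preds

def direct_forecast(models, history, window):
    """One dedicated model per horizon step t+1, ..., t+H."""
    x = np.array(history[-window:]).reshape(1, -1)
    return [m.predict(x)[0] for m in models]

# Training is omitted: for Recursive, fit one RandomForestRegressor on
# (window -> next value) pairs; for Direct, fit H regressors, the h-th
# on (window -> value h steps ahead) pairs.
```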
318

Evaluating Multitemporal Sentinel-2 data for Forest Mapping using Random Forest

Nelson, Marc January 2017 (has links)
The mapping of land cover using remotely sensed data is most effective when a robust classification method is employed. Random forest is a modern machine learning algorithm that has recently gained interest in the field of remote sensing due to its non-parametric nature, which may be better suited to handle complex, high-dimensional data than conventional techniques. In this study, the random forest method is applied to remote sensing data from the European Space Agency’s new Sentinel-2 satellite program, which was launched in 2015 yet remains relatively untested in the scientific literature using non-simulated data. In a study site of boreo-nemoral forest in Ekerö municipality, Sweden, a classification is performed for six forest classes based on CadasterENV Sweden, a multi-purpose land cover mapping and change monitoring program. The performance of Sentinel-2’s Multi-Spectral Imager is investigated in the context of time series to capture phenological conditions, optimal band combinations, as well as the influence of sample size and ancillary inputs. Using two images from spring and summer of 2016, an overall map accuracy of 86.0% was achieved. The red edge, short wave infrared, and visible red bands were confirmed to be of high value. Important factors contributing to the result include the timing of image acquisition, the use of a feature reduction approach to decrease the correlation between spectral channels, and the addition of ancillary data that combines topographic and edaphic information. The results suggest that random forest is an effective classification technique that is particularly well suited to high-dimensional remote sensing data.
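As a sketch of the correlation-reducing feature reduction mentioned above, the snippet below greedily drops bands that correlate strongly with an already-kept band; the 0.9 cutoff is an assumption, not the thesis's setting.

```python
# Sketch: correlation-based feature reduction for multitemporal
# spectral bands (illustrative; the cutoff is assumed).
import numpy as np

def drop_correlated(X, names, cutoff=0.9):
    """Drop the later of any feature pair whose absolute Pearson
    correlation exceeds the cutoff."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= cutoff for k in keep):
            keep.append(j)
    return X[:, keep], [names[j] for j in keep]

# X: pixels x (bands from the spring and summer acquisitions stacked)
# X_reduced, kept_bands = drop_correlated(X, band_names)
```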
319

Bedömning av fakturor med hjälp av maskininlärning / Invoice Classification using Machine Learning

Hjalmarsson, Martin, Björkman, Mikael January 2017 (has links)
Factoring means selling invoices to a third party, and thereby the possibility of raising capital quickly; it has become increasingly popular among companies today. Purchasing an invoice entails a certain credit risk for the company in cases where the invoice is not paid, and as a buyer of capital one wants to minimize that risk. Aros Kapital offers its customers the factoring service. This project investigates the possibility of using machine learning methods to assess whether an invoice is a good or bad investment. If machine learning turns out to be better than manual handling, better results can also be achieved in the form of reduced credit losses, the purchase of more invoices and thereby increased profit. Four machine learning methods were compared: decision trees, random forest, AdaBoost and deep neural networks. Besides the comparison among themselves, the methods were compared with Aros's existing decisions and current rule engine. Of the compared machine learning methods, random forest performed best and proved better than Aros's existing decisions on the tested invoices; random forest obtained an F1-score of 0.35 and Aros 0.22. / Today, companies can sell their invoices to a third party in order to quickly capitalize them. This is called factoring. For the financial institute that serves as the third party, the purchase of an invoice carries a certain risk in case the invoice is not paid, a risk the financial institute would like to minimize. Aros Kapital is a financial institute that offers factoring as one of its services. This project at Aros Kapital evaluated the possibility of using machine learning to determine whether or not an invoice will be a good investment for the financial institute. If the machine learning algorithm performs better than manual handling, reducing credit losses and allowing more invoices to be bought, this could lead to an increase in profit for Aros. Four machine learning algorithms were compared: decision trees, random forest, AdaBoost and deep neural networks. Beyond the comparison between the four algorithms, the algorithms were also compared with Aros's actual decisions and Aros's current rule engine solution. The results show that random forest is the best performing algorithm, with a slight improvement over Aros's actual decisions: random forest obtained an F1-score of 0.35 and Aros 0.22.
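A minimal sketch of the F1 comparison is given below; the variable names and the model configuration are assumptions, illustrating only how model predictions and existing manual decisions are scored against the realized outcomes.

```python
# Sketch: scoring invoice decisions with the F1 measure, comparing a
# model against the existing manual decisions (illustrative only).
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

# X_*: invoice features; y_*: 1 if the invoice turned out to be a bad
# investment, else 0; manual_decisions_test: decisions actually taken.
# class_weight='balanced' compensates for the rare positive class.
# model = RandomForestClassifier(n_estimators=200,
#                                class_weight='balanced').fit(X_train, y_train)
# print('model F1 :', f1_score(y_test, model.predict(X_test)))
# print('manual F1:', f1_score(y_test, manual_decisions_test))
```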
320

Strategies for Combining Tree-Based Ensemble Models

Zhang, Yi 01 January 2017 (has links)
Ensemble models have proved effective in a variety of classification tasks. These models combine the predictions of several base models to achieve higher out-of-sample classification accuracy than the base models. Base models are typically trained using different subsets of training examples and input features. Ensemble classifiers are particularly effective when their constituent base models are diverse in terms of their prediction accuracy in different regions of the feature space. This dissertation investigated methods for combining ensemble models, treating them as base models. The goal is to develop a strategy for combining ensemble classifiers that results in higher classification accuracy than the constituent ensemble models. Three of the best performing tree-based ensemble methods – random forest, extremely randomized trees, and the eXtreme gradient boosting model – were used to generate a set of base models. Outputs from classifiers generated by these methods were then combined to create an ensemble classifier. This dissertation systematically investigated methods for (1) selecting a set of diverse base models, and (2) combining the selected base models. The methods were evaluated using public domain data sets which have been extensively used for benchmarking classification models. The research established that applying random forest as the final ensemble method to integrate the selected base models and the factor scores of a multiple correspondence analysis turned out to be the best ensemble approach.
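A hedged sketch of this kind of stacking is shown below, using scikit-learn's StackingClassifier with a random forest as the final combiner; GradientBoostingClassifier stands in for XGBoost, and the multiple correspondence analysis factor-score inputs are omitted, so this is an illustration rather than the dissertation's exact pipeline.

```python
# Sketch: stacking tree-based ensembles with random forest as the
# final combiner (illustrative; details are assumed).
from sklearn.ensemble import (RandomForestClassifier,
                              ExtraTreesClassifier,
                              GradientBoostingClassifier,
                              StackingClassifier)

base_models = [
    ('rf', RandomForestClassifier(n_estimators=300)),
    ('et', ExtraTreesClassifier(n_estimators=300)),
    ('gb', GradientBoostingClassifier()),   # stand-in for XGBoost
]
stack = StackingClassifier(
    estimators=base_models,
    final_estimator=RandomForestClassifier(n_estimators=300),
    stack_method='predict_proba',           # combine class probabilities
    cv=5,                                   # out-of-fold base predictions
)
# stack.fit(X_train, y_train); stack.score(X_test, y_test)
```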
