Inner Ensembles: Using Ensemble Methods in Learning Step

A pivotal moment in machine learning research was the creation of an important new research area known as Ensemble Learning. In this work, we argue that ensembles are a very general concept and, though they have been widely used, can be applied in more situations than they have been to date. Rather than using them only to combine the outputs of an algorithm, we can apply them to decisions made inside the algorithm itself, during the learning step. We call this approach Inner Ensembles. The motivation for developing Inner Ensembles was the opportunity to produce models with the same advantages as regular ensembles, such as accuracy and stability, plus additional advantages such as comprehensibility, simplicity, rapid classification, and a small memory footprint. The main contribution of this work is to demonstrate how broadly this idea can be applied and to highlight its potential impact on all types of algorithms. To support our claim, we first provide a general guideline for applying Inner Ensembles to different algorithms. Then, using this framework, we apply them to two categories of learning methods: supervised and unsupervised. For the former we chose Bayesian networks, and for the latter K-Means clustering. Our results show that 1) the overall performance of Inner Ensembles is significantly better than that of the original methods, and 2) Inner Ensembles provide performance improvements similar to those of regular ensembles.
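
To make the idea concrete, below is a minimal sketch of how an inner ensemble might be embedded in K-Means: at each centroid-update step, the new centroid is an ensemble decision, averaged over estimates computed from bootstrap resamples of the cluster's points, rather than a single mean. The bootstrap scheme, the function name inner_ensemble_kmeans, and parameters such as n_estimators are illustrative assumptions for this sketch, not the thesis's exact formulation.

    import numpy as np

    def inner_ensemble_kmeans(X, k, n_estimators=10, n_iters=100, seed=None):
        """K-Means where each centroid update is an ensemble decision (sketch)."""
        rng = np.random.default_rng(seed)
        # Initialize centroids from k distinct data points.
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iters):
            # Assignment step: nearest centroid for every point.
            labels = np.argmin(
                ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1
            )
            new_centroids = centroids.copy()
            for j in range(k):
                members = X[labels == j]
                if len(members) == 0:
                    continue
                # Inner ensemble: each estimator computes the centroid from a
                # bootstrap resample of the cluster; the update averages them.
                estimates = [
                    members[rng.choice(len(members), size=len(members))].mean(axis=0)
                    for _ in range(n_estimators)
                ]
                new_centroids[j] = np.mean(estimates, axis=0)
            if np.allclose(new_centroids, centroids):
                break
            centroids = new_centroids
        return centroids, labels

With n_estimators=1 and no resampling this reduces to ordinary K-Means, which is the point of the construction: the ensemble sits inside the learning step, so the final model is still a single set of k centroids, keeping classification fast and the memory footprint small.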

Identifier: oai:union.ndltd.org:LACETR/oai:collectionscanada.gc.ca:OOU.#10393/31127
Date: 16 May 2014
Creators: Abbasian, Houman
Source Sets: Library and Archives Canada ETDs Repository / Centre d'archives des thèses électroniques de Bibliothèque et Archives Canada
Language: English
Type: Thesis
