Inner Ensembles: Using Ensemble Methods in Learning Step. Abbasian, Houman. 16 May 2014.
A pivotal moment in machine learning research was the creation of an important new
research area known as Ensemble Learning. In this work, we argue that ensembles are
a very general concept and, though they have been widely used, can be applied in
more situations than they have been to date. Rather than using them only to combine
the outputs of an algorithm, we can apply them to decisions made inside the algorithm
itself, during the learning step. We call this approach Inner Ensembles. The motivation
for developing Inner Ensembles was the opportunity to produce models with similar
advantages to regular ensembles, such as accuracy and stability, plus additional
advantages such as comprehensibility, simplicity, rapid classification, and a small
memory footprint. The main contribution of this work is to demonstrate how broadly
this idea can be applied and to highlight its potential impact on all types of
algorithms. To support our claim, we first provide a general guideline for applying
Inner Ensembles to different algorithms. Then, using this framework, we apply them to
two categories of learning methods: supervised and unsupervised. For the former we
chose Bayesian networks, and for the latter K-Means clustering. Our results show that
1) the overall performance of Inner Ensembles is significantly better than that of the
original methods, and 2) Inner Ensembles provide similar performance improvements to
regular ensembles.
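
The abstract describes the core mechanism only at a high level: ensembles are applied to decisions made inside a learning algorithm rather than to its final outputs. As a minimal sketch of how this could look for the K-Means case mentioned above, the Python fragment below replaces the single centroid update at each iteration with an average over bootstrap estimates. The function name, ensemble size, and bootstrap scheme are illustrative assumptions, not the thesis's actual algorithm.

    import numpy as np

    def inner_ensemble_kmeans(X, k, n_estimators=10, n_iters=50, seed=0):
        # Hypothetical sketch: the ensemble size, iteration count, and
        # bootstrap scheme are illustrative choices, not from the thesis.
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
        labels = np.zeros(len(X), dtype=int)
        for _ in range(n_iters):
            # Assignment step: label each point with its nearest centroid.
            dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Update step, ensembled: instead of one mean per cluster, average
            # centroid estimates computed from bootstrap resamples of the
            # cluster's members (an "inner" ensemble over a learning decision).
            for j in range(k):
                members = X[labels == j]
                if len(members) == 0:
                    continue  # keep the previous centroid for an empty cluster
                estimates = [
                    members[rng.integers(0, len(members), size=len(members))].mean(axis=0)
                    for _ in range(n_estimators)
                ]
                centroids[j] = np.mean(estimates, axis=0)
        return centroids, labels

    # Example usage on synthetic data:
    X = np.random.default_rng(1).normal(size=(300, 2))
    centroids, labels = inner_ensemble_kmeans(X, k=3)

The design intent this sketch tries to capture is the one stated in the abstract: the final model is still a single, ordinary K-Means model (small, fast, interpretable), while the ensemble lives only inside the learning step to stabilize each internal decision.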