
Analysis of Perceptron-Based Active Learning

We start by showing that in an active learning setting, the Perceptron algorithm needs $\Omega(\frac{1}{\epsilon^2})$ labels to learn linear separators within generalization error $\epsilon$. We then present a simple selective sampling algorithm for this problem, which combines a modification of the perceptron update with an adaptive filtering rule for deciding which points to query. For data distributed uniformly over the unit sphere, we show that our algorithm reaches generalization error $\epsilon$ after asking for just $\tilde{O}(d \log \frac{1}{\epsilon})$ labels. This exponential improvement over the usual sample complexity of supervised learning has previously been demonstrated only for the computationally more complex query-by-committee algorithm.
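The selective sampling scheme described above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact procedure: the query threshold `s`, the reflection-style update `v <- v - 2(v . x) x`, the `patience` parameter, and the threshold-halving rule are all assumptions made for the sketch, with data drawn uniformly from the unit sphere as in the analyzed setting.

```python
import math
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [a / n for a in v]

def random_unit(d, rng):
    # Uniform on the unit sphere: normalize a vector of i.i.d. Gaussians.
    return normalize([rng.gauss(0.0, 1.0) for _ in range(d)])

def selective_perceptron(d, stream_len, target, rng, patience=20):
    """Hypothetical sketch of a perceptron with selective sampling.

    Queries a label only when |v . x| falls below an adaptive
    threshold s; on a mistake, applies the modified update
    v <- v - 2 (v . x) x, which reflects away the mistaken
    component (and preserves the norm of v); halves s after
    `patience` consecutive correctly predicted queried labels.
    All parameter choices here are illustrative assumptions.
    """
    v = random_unit(d, rng)          # current hypothesis (unit vector)
    s = 1.0 / math.sqrt(d)           # initial query threshold (assumed)
    correct_streak = 0
    queries = 0
    for _ in range(stream_len):
        x = random_unit(d, rng)      # unlabeled point from the stream
        margin = dot(v, x)
        if abs(margin) >= s:
            continue                 # confident region: skip the label query
        queries += 1
        y = 1 if dot(target, x) >= 0 else -1   # oracle label
        if y * margin <= 0:
            # Modified perceptron update on a mistake: v . target
            # never decreases, since (v . x)(x . target) <= 0 here.
            v = normalize([vi - 2 * margin * xi for vi, xi in zip(v, x)])
            correct_streak = 0
        else:
            correct_streak += 1
            if correct_streak >= patience:
                s /= 2.0             # adaptive filtering: tighten the query region
                correct_streak = 0
    return v, queries

if __name__ == "__main__":
    rng = random.Random(0)
    d = 5
    target = random_unit(d, rng)
    v, queries = selective_perceptron(d, stream_len=5000, target=target, rng=rng)
    print("alignment with target:", dot(v, target))
    print("labels queried:", queries, "of", 5000)
```

The point of the filtering rule is that labels are requested only near the current decision boundary, where they are informative; the threshold shrinks as the hypothesis improves, which is what drives the label complexity down from $\Omega(\frac{1}{\epsilon^2})$ toward $\tilde{O}(d \log \frac{1}{\epsilon})$ in the uniform-distribution analysis.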

Identifier: oai:union.ndltd.org:MIT/oai:dspace.mit.edu:1721.1/30585
Date: 17 November 2005
Creators: Dasgupta, Sanjoy; Kalai, Adam Tauman; Monteleoni, Claire
Source Sets: M.I.T. Theses and Dissertation
Language: en_US
Detected Language: English
Format: 15 p., 11491832 bytes, 599624 bytes, application/postscript, application/pdf
Relation: Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory
