We study online active learning under the Bernstein condition for bounded general
losses and propose a method for online variance estimation. Our algorithm builds on
IWAL (Importance Weighted Active Learning) and uses the online variance estimate to
shrink the hypothesis set. For this algorithm, we provide a fallback guarantee and
prove that when R(f*) is small, where R(f*) denotes the risk of the best hypothesis
in the hypothesis class, it converges faster than passive learning. Finally, in the
special case of zero-one loss, an exponential improvement in label complexity over
passive learning is achieved.
Degree | Graduate
Identifier | oai:union.ndltd.org:uvic.ca/oai:dspace.library.uvic.ca:1828/12075 |
Date | 31 August 2020 |
Creators | Shayestehmanesh, Hamid |
Contributors | Mehta, Nishant |
Source Sets | University of Victoria |
Language | English
Detected Language | English |
Type | Thesis |
Format | application/pdf |
Rights | Available to the World Wide Web |
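The querying rule of IWAL, which the abstract's algorithm builds on, can be sketched as follows. This is a minimal illustration only: the 1-D threshold hypothesis class, the zero-one loss, and the hand-shrunk hypothesis set below are assumptions for the sketch, not the thesis's actual construction (in particular, the variance-based shrinking is not reproduced here).

```python
import random

def zero_one(pred, y):
    """Zero-one loss, the special case highlighted in the abstract."""
    return float(pred != y)

# Toy hypothesis class: 1-D threshold classifiers f_c(x) = 1[x >= c].
thresholds = [i / 10 for i in range(11)]

def predict(c, x):
    return int(x >= c)

def rejection_probability(hyp_set, x, labels=(0, 1)):
    # IWAL's p_t: the largest loss disagreement over pairs of
    # hypotheses in the current set and over possible labels y.
    return max(
        zero_one(predict(f, x), y) - zero_one(predict(g, x), y)
        for f in hyp_set for g in hyp_set for y in labels
    )

def iwal_step(hyp_set, x, oracle, rng):
    """Query the label with probability p_t; weight queried examples
    by 1/p_t so the resulting risk estimates stay unbiased."""
    p = rejection_probability(hyp_set, x)
    if p > 0 and rng.random() < p:
        y = oracle(x)              # label is requested
        return (x, y, 1.0 / p)     # importance-weighted example
    return None                    # label is skipped, no query spent

def run_stream(hyp_set, stream, oracle, rng):
    results = (iwal_step(hyp_set, x, oracle, rng) for x in stream)
    return [q for q in results if q is not None]

oracle = lambda x: int(x >= 0.5)     # assumed true threshold at 0.5
stream = [i / 200 for i in range(200)]

# With the full hypothesis set, some pair disagrees on every point,
# so every label is queried; once the set is shrunk to hypotheses
# near the truth, only points in the disagreement region cost a label.
full_queries = run_stream(thresholds, stream, oracle, random.Random(0))
shrunk_queries = run_stream([0.45, 0.5, 0.55], stream, oracle, random.Random(0))
print(len(full_queries), "vs", len(shrunk_queries), "labels queried")
```

The contrast between the two runs is the point of shrinking the hypothesis set: fewer surviving hypotheses means a smaller disagreement region and hence fewer label queries, which is the mechanism behind the label-complexity savings the abstract claims.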