We study properties of algorithms which minimize (or almost minimize) empirical error over a Donsker class of functions. We show that the L2-diameter of the set of almost-minimizers converges to zero in probability. Therefore, as the number of samples grows, it becomes unlikely that adding a point (or a number of points) to the training set will result in a large jump (in L2 distance) to a new hypothesis. We also show that under some conditions the expected errors of the almost-minimizers become close at a rate faster than n^{-1/2}.
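A hedged formalization of the main claim, written as a LaTeX sketch; the notation F, P, P_n, xi_n, and M_n is introduced here for illustration and does not appear in the record, and this is a reading of the abstract rather than the paper's exact statement. With F a Donsker class, P the underlying distribution, P_n the empirical measure on n samples, and xi_n -> 0 the almost-minimization tolerance, the set of almost-minimizers and the stated convergence can be written as

% Hypothetical notation; a sketch of the abstract's claim, not the paper's exact theorem.
\[
  \mathcal{M}_n \;=\; \Bigl\{ f \in \mathcal{F} \;:\; P_n f \,\le\, \inf_{g \in \mathcal{F}} P_n g + \xi_n \Bigr\},
  \qquad
  \operatorname{diam}_{L_2(P)}\bigl(\mathcal{M}_n\bigr) \;\xrightarrow{\;\mathbb{P}\;}\; 0
  \quad \text{as } n \to \infty .
\]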
Identifier | oai:union.ndltd.org:MIT/oai:dspace.mit.edu:1721.1/30545 |
Date | 17 May 2005 |
Creators | Caponnetto, Andrea, Rakhlin, Alexander |
Source Sets | M.I.T. Theses and Dissertations |
Language | en_US |
Detected Language | English |
Format | 9 p., 7033622 bytes, 434782 bytes, application/postscript, application/pdf |
Relation | Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory |