
Sharing visual features for multiclass and multiview object detection

We consider the problem of detecting a large number of different classes of objects in cluttered scenes. Traditional approaches require applying a battery of different classifiers to the image, at multiple locations and scales. This can be slow and can require a lot of training data, since each classifier requires the computation of many different image features. In particular, for independently trained detectors, the (run-time) computational complexity and the (training-time) sample complexity scale linearly with the number of classes to be detected. It seems unlikely that such an approach will scale up to allow recognition of hundreds or thousands of objects. We present a multi-class boosting procedure (joint boosting) that reduces the computational and sample complexity by finding common features that can be shared across the classes (and/or views). The detectors for each class are trained jointly, rather than independently. For a given performance level, the total number of features required, and therefore the computational cost, is observed to scale approximately logarithmically with the number of classes. Rather than isolating specific object parts, the jointly selected features tend to be edges and other generic features typical of many natural structures. These generic features generalize better and considerably reduce the computational cost of multi-class object detection.
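To make the shared-feature idea concrete, the following is a minimal sketch in Python/NumPy of a joint-boosting-style procedure: each round fits one regression stump and greedily grows the subset of classes that share it. The function names, the GentleBoost-style weighting, and the simplified subset search are assumptions made for illustration, not the authors' implementation.

# Minimal sketch of the shared-feature ("joint boosting") idea, assuming
# GentleBoost-style weights and regression stumps on single features.
# Everything here (names, the greedy class-subset search, the simplified
# update) is illustrative, not the code from the report.
import numpy as np

def best_shared_stump(X, Z, W, subset):
    """Best stump (feature, threshold, a, b) shared by the classes in `subset`,
    minimizing the weighted squared error sum_c sum_i W[i,c] * (Z[i,c] - h(x_i))^2."""
    best_err, best_params = np.inf, None
    w, z = W[:, subset], Z[:, subset]
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            above = X[:, f] > t
            a = (w[above] * z[above]).sum() / (w[above].sum() + 1e-12)    # value if x_f > t
            b = (w[~above] * z[~above]).sum() / (w[~above].sum() + 1e-12) # value otherwise
            pred = np.where(above, a, b)[:, None]
            err = (w * (z - pred) ** 2).sum()
            if err < best_err:
                best_err, best_params = err, (f, t, a, b)
    return best_err, best_params

def joint_boost(X, Y, rounds=20):
    """X: (n, d) feature matrix; Y: (n, C) class labels in {-1, +1}.
    Returns a list of (stump parameters, shared class subset) pairs."""
    n, C = Y.shape
    H = np.zeros((n, C))                      # additive score for each class
    learners = []
    for _ in range(rounds):
        W = np.exp(-Y * H)                    # boosting weights per example and class
        # Grow the subset of classes sharing this round's stump greedily,
        # adding a class only while it lowers the total weighted error.
        remaining, subset = list(range(C)), []
        best_err, best_subset, best_params = np.inf, None, None
        while remaining:
            err, c = min((best_shared_stump(X, Y, W, subset + [c])[0], c) for c in remaining)
            if err >= best_err:
                break
            subset.append(c)
            remaining.remove(c)
            best_err, best_subset = err, list(subset)
            best_params = best_shared_stump(X, Y, W, subset)[1]
        f, t, a, b = best_params
        H[:, best_subset] += np.where(X[:, f] > t, a, b)[:, None]  # shared update
        learners.append(((f, t, a, b), best_subset))
    return learners

The report's full procedure differs in its details (for example in how the class subsets are searched and how classes outside the shared subset are handled); the sketch above only keeps the core mechanism of a single weak learner being reused by several class detectors at once.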

Identifier: oai:union.ndltd.org:MIT/oai:dspace.mit.edu:1721.1/30399
Date: 14 April 2004
Creators: Torralba, Antonio, Murphy, Kevin P., Freeman, William T.
Source Sets: M.I.T. Theses and Dissertation
Language: en_US
Detected Language: English
Format: 17 p., 24172096 bytes, 1434721 bytes, application/postscript, application/pdf
Relation: Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory
