Automatic feature decomposition for single view co-training

Minmin Chen, Kilian Q. Weinberger, Yixin Chen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

81 Scopus citations

Abstract

One of the most successful semi-supervised learning approaches is co-training for multi-view data. In co-training, one trains two classifiers, one for each view, and the two classifiers "teach each other" using their most confident predictions on the unlabeled data. In this paper, we extend co-training to learning scenarios without an explicit multi-view representation. Inspired by a theoretical analysis of Balcan et al. (2004), we introduce a novel algorithm that splits the feature space during learning, explicitly to encourage co-training to be successful. We demonstrate the efficacy of our proposed method in a weakly-supervised setting on the challenging Caltech-256 object recognition task, where we improve significantly over previous results by Bergamo and Torresani (2010) in almost all training-set size settings.
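To make the co-training procedure described above concrete, the sketch below runs standard co-training on synthetic two-class data with a *random* feature split into two views (a hypothetical simplification; the paper's contribution is learning the split automatically). The nearest-centroid classifier, data distribution, and all variable names are illustrative assumptions, not the paper's method.

```python
# Illustrative co-training sketch (NOT the paper's algorithm): a random
# feature split stands in for the learned split, and a simple
# nearest-centroid classifier stands in for the base learners.
import numpy as np

rng = np.random.default_rng(0)

def fit_centroids(X, y):
    """Fit one centroid per class."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict(X, classes, centroids):
    """Return predicted labels and a confidence score (negative distance)."""
    d = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)
    return classes[d.argmin(1)], -d.min(1)

# Synthetic two-class data with 8 features (well-separated Gaussians).
X = np.vstack([rng.normal(0, 1, (100, 8)), rng.normal(3, 1, (100, 8))])
y_true = np.r_[np.zeros(100, int), np.ones(100, int)]

# Small labeled seed set (5 points per class); the rest is unlabeled.
labeled = list(rng.choice(100, 5, replace=False)) + \
          list(rng.choice(np.arange(100, 200), 5, replace=False))
y_hat = {int(i): int(y_true[i]) for i in labeled}
unlabeled = [i for i in range(200) if i not in y_hat]

# Random split of the 8 features into two "views".
perm = rng.permutation(8)
views = [perm[:4], perm[4:]]

for _ in range(20):  # co-training rounds
    if not unlabeled:
        break
    idxL = np.array(sorted(y_hat))
    yL = np.array([y_hat[i] for i in idxL])
    idxU = np.array(unlabeled)
    for v in views:
        # Each view's classifier pseudo-labels its most confident
        # unlabeled point, which then "teaches" the other view.
        cls, cen = fit_centroids(X[idxL][:, v], yL)
        labels, conf = predict(X[idxU][:, v], cls, cen)
        pick = conf.argmax()
        i = int(idxU[pick])
        y_hat[i] = int(labels[pick])
        unlabeled.remove(i)
        if not unlabeled:
            break
        idxU = np.array(unlabeled)

# Final classifier trained on the enlarged (pseudo-)labeled pool.
idxL = np.array(sorted(y_hat))
yL = np.array([y_hat[i] for i in idxL])
cls, cen = fit_centroids(X[idxL], yL)
pred, _ = predict(X, cls, cen)
print(f"accuracy: {(pred == y_true).mean():.2f}")
```

The key design choice mirrored from co-training is that each classifier sees only its own view's features, so a confident prediction from one view adds genuinely new label information for the other.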

Original language: English
Title of host publication: Proceedings of the 28th International Conference on Machine Learning, ICML 2011
Pages: 953-960
Number of pages: 8
State: Published - 2011
Event: 28th International Conference on Machine Learning, ICML 2011 - Bellevue, WA, United States
Duration: Jun 28, 2011 – Jul 2, 2011

Publication series

Name: Proceedings of the 28th International Conference on Machine Learning, ICML 2011

Conference

Conference: 28th International Conference on Machine Learning, ICML 2011
Country/Territory: United States
City: Bellevue, WA
Period: 06/28/11 – 07/02/11
