Large margin local estimate with applications to medical image classification

Yang Song, Weidong Cai, Heng Huang, Yun Zhou, David Dagan Feng, Yue Wang, Michael J. Fulham, Mei Chen

Research output: Contribution to journal › Article › peer-review

68 Scopus citations


Medical images usually exhibit large intra-class variation and inter-class ambiguity in the feature space, which can degrade classification accuracy. To tackle this issue, we propose a new Large Margin Local Estimate (LMLE) classification model with sub-categorization-based sparse representation. We first sub-categorize the reference sets of the different classes into multiple clusters, so that feature variation within each subcategory is reduced compared to the entire reference set. Local estimates are then generated for the test image using sparse representation, with the reference subcategories serving as dictionaries. The similarity between the test image and each class is computed by fusing the distances to these local estimates in a learning-based large margin aggregation construct, which alleviates the problem of inter-class ambiguity. The derived similarities finally determine the class label. We demonstrate that our LMLE model is generally applicable across imaging modalities by applying it to three tasks: interstitial lung disease (ILD) classification on high-resolution computed tomography (HRCT) images, and phenotype binary classification and continuous regression on brain magnetic resonance (MR) imaging. Our experimental results show statistically significant performance improvements over existing popular classifiers.
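The pipeline described above can be sketched in a few lines of NumPy. This is a minimal toy illustration, not the authors' implementation: plain k-means stands in for the paper's sub-categorization step, ridge-regularized least squares stands in for sparse coding against each subcategory dictionary, and an unweighted nearest-estimate rule replaces the learned large-margin fusion of distances. All function names and parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=30):
    """Plain Lloyd's k-means, used here to sub-categorize one class's reference set."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

def local_estimate(x, D, lam=1.0):
    """Reconstruct x from dictionary D (columns = one subcategory's samples).
    Ridge-regularized least squares is a stand-in for the paper's sparse coding."""
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x)
    return D @ alpha

def classify(x, refs_by_class, k=2):
    """Label x by the class whose subcategory-based local estimate reconstructs
    it best (an unweighted stand-in for the learned large-margin fusion)."""
    best_cls, best_dist = None, np.inf
    for cls, X in refs_by_class.items():
        labels = kmeans(X, k)
        for j in range(k):
            sub = X[labels == j]
            if len(sub) == 0:
                continue
            dist = np.linalg.norm(x - local_estimate(x, sub.T))
            if dist < best_dist:
                best_cls, best_dist = cls, dist
    return best_cls

# Toy 2-D data: class 0 has two sub-modes (large intra-class variation),
# class 1 is a single blob.
refs = {
    0: np.vstack([rng.normal(0.0, 0.3, (20, 2)),
                  rng.normal(5.0, 0.3, (20, 2))]),
    1: rng.normal([10.0, 0.0], 0.3, (40, 2)),
}
print(classify(np.array([9.8, 0.1]), refs))
print(classify(np.array([5.1, 4.9]), refs))
```

Note how sub-categorization matters here: class 0's two sub-modes each get their own dictionary, so a test point near either mode is reconstructed from the matching subcategory rather than from a single averaged reference set.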

Original language: English
Article number: 7014242
Pages (from-to): 1362-1377
Number of pages: 16
Journal: IEEE Transactions on Medical Imaging
Issue number: 6
State: Published - Jun 1 2015


Keywords:
  • Large margin fusion
  • medical image classification
  • sparse representation
  • sub-categorization


