TY - GEN
T1 - Hand-Crafted Feature Guided Histologic Image Classification via Weak-to-Strong Generalization
AU - Lu, Changjie
AU - Fan, Zong
AU - Wang, Zhimin
AU - Anastasio, Mark
AU - Sun, Lulu
AU - Wang, Xiaowei
AU - Li, Hua
N1 - Publisher Copyright:
© 2025 SPIE.
PY - 2025
Y1 - 2025
N2 - Deep learning (DL) has demonstrated outstanding performance in histologic image classification due to its remarkable capability to capture complex, non-linear discriminative patterns. However, these models often lack interpretability in clinical settings. Hand-crafted features, such as gradients, lesion density, and size, are directly derived from regions of interest, providing explicit interpretability. While previous methods have attempted to fuse these two feature types to enhance classification, they have not fully explored their intrinsic relationships. In this paper, we introduce a novel Weak-to-Strong Generalization (WSG) histologic image classification framework that effectively integrates hand-crafted and DL features. By leveraging information theory, we quantify their relationships, providing a deeper understanding of feature interpretability. We propose an adaptive bootstrapping WSG loss that enables a self-supervised model (strong) to learn high-confidence predictions from a decision tree model trained with hand-crafted features (weak). This approach allows the strong model to obtain supervision from the interpretable weak model while preserving its own knowledge, thereby improving classification accuracy and feature interpretability. We employed multiple weak models (LightGBM, Random Forest, XGBoost) and strong models (ResNet, VGG, MobileNet) to demonstrate the robustness of the WSG framework. This study offers a new paradigm for designing interpretable deep learning approaches in pathology research, bridging the gap between traditional feature engineering and modern DL models.
AB - Deep learning (DL) has demonstrated outstanding performance in histologic image classification due to its remarkable capability to capture complex, non-linear discriminative patterns. However, these models often lack interpretability in clinical settings. Hand-crafted features, such as gradients, lesion density, and size, are directly derived from regions of interest, providing explicit interpretability. While previous methods have attempted to fuse these two feature types to enhance classification, they have not fully explored their intrinsic relationships. In this paper, we introduce a novel Weak-to-Strong Generalization (WSG) histologic image classification framework that effectively integrates hand-crafted and DL features. By leveraging information theory, we quantify their relationships, providing a deeper understanding of feature interpretability. We propose an adaptive bootstrapping WSG loss that enables a self-supervised model (strong) to learn high-confidence predictions from a decision tree model trained with hand-crafted features (weak). This approach allows the strong model to obtain supervision from the interpretable weak model while preserving its own knowledge, thereby improving classification accuracy and feature interpretability. We employed multiple weak models (LightGBM, Random Forest, XGBoost) and strong models (ResNet, VGG, MobileNet) to demonstrate the robustness of the WSG framework. This study offers a new paradigm for designing interpretable deep learning approaches in pathology research, bridging the gap between traditional feature engineering and modern DL models.
KW - Feature Interpretability
KW - Weak-to-Strong Generalization
KW - Whole Slide Image
UR - https://www.scopus.com/pages/publications/105004795852
U2 - 10.1117/12.3047492
DO - 10.1117/12.3047492
M3 - Conference contribution
AN - SCOPUS:105004795852
T3 - Progress in Biomedical Optics and Imaging - Proceedings of SPIE
BT - Medical Imaging 2025
A2 - Tomaszewski, John E.
A2 - Ward, Aaron D.
PB - SPIE
T2 - Medical Imaging 2025: Digital and Computational Pathology
Y2 - 18 February 2025 through 20 February 2025
ER -