TY - JOUR
T1 - Interactive machine learning-based multi-label segmentation of solid tumors and organs
AU - Bounias, Dimitrios
AU - Singh, Ashish
AU - Bakas, Spyridon
AU - Pati, Sarthak
AU - Rathore, Saima
AU - Akbari, Hamed
AU - Bilello, Michel
AU - Greenberger, Benjamin A.
AU - Lombardo, Joseph
AU - Chitalia, Rhea D.
AU - Jahani, Nariman
AU - Gastounioti, Aimilia
AU - Hershman, Michelle
AU - Roshkovan, Leonid
AU - Katz, Sharyn I.
AU - Yousefi, Bardia
AU - Lou, Carolyn
AU - Simpson, Amber L.
AU - Do, Richard K.G.
AU - Shinohara, Russell T.
AU - Kontos, Despina
AU - Nikita, Konstantina
AU - Davatzikos, Christos
N1 - Funding Information:
Funding: Research reported in this publication was partly supported by the National Institutes of Health (NIH) under award numbers NIH/NCI:U24CA189523, NIH/NCI:U01CA242871, and NIH/NINDS:R01NS042645. The content of this publication is solely the responsibility of the authors and does not represent the official views of the NIH.
Publisher Copyright:
© 2021 by the authors. Licensee MDPI, Basel, Switzerland.
PY - 2021/8/2
Y1 - 2021/8/2
N2 - We seek the development and evaluation of a fast, accurate, and consistent method for general-purpose segmentation, based on interactive machine learning (IML). To validate our method, we identified retrospective cohorts of 20 brain, 50 breast, and 50 lung cancer patients, as well as 20 spleen scans, with corresponding ground truth annotations. Utilizing very brief user training annotations and the adaptive geodesic distance transform, an ensemble of SVMs is trained, providing a patient-specific model applied to the whole image. Two experts segmented each cohort twice with our method and twice manually. The IML method was faster than manual annotation by 53.1% on average. We found significant (p < 0.001) overlap differences for spleen (DiceIML/DiceManual = 0.91/0.87), breast tumors (DiceIML/DiceManual = 0.84/0.82), and lung nodules (DiceIML/DiceManual = 0.78/0.83). For intra-rater consistency, a significant (p = 0.003) difference was found for spleen (DiceIML/DiceManual = 0.91/0.89). For inter-rater consistency, significant (p < 0.045) differences were found for spleen (DiceIML/DiceManual = 0.91/0.87), breast (DiceIML/DiceManual = 0.86/0.81), lung (DiceIML/DiceManual = 0.85/0.89), and the non-enhancing (DiceIML/DiceManual = 0.79/0.67) and enhancing (DiceIML/DiceManual = 0.79/0.84) brain tumor sub-regions, which, in aggregation, favored our method. Quantitative evaluation of speed, spatial overlap, and consistency reveals the benefits of our proposed method compared with manual annotation for several clinically relevant problems. We publicly release our implementation through CaPTk (Cancer Imaging Phenomics Toolkit) and as an MITK plugin.
AB - We seek the development and evaluation of a fast, accurate, and consistent method for general-purpose segmentation, based on interactive machine learning (IML). To validate our method, we identified retrospective cohorts of 20 brain, 50 breast, and 50 lung cancer patients, as well as 20 spleen scans, with corresponding ground truth annotations. Utilizing very brief user training annotations and the adaptive geodesic distance transform, an ensemble of SVMs is trained, providing a patient-specific model applied to the whole image. Two experts segmented each cohort twice with our method and twice manually. The IML method was faster than manual annotation by 53.1% on average. We found significant (p < 0.001) overlap differences for spleen (DiceIML/DiceManual = 0.91/0.87), breast tumors (DiceIML/DiceManual = 0.84/0.82), and lung nodules (DiceIML/DiceManual = 0.78/0.83). For intra-rater consistency, a significant (p = 0.003) difference was found for spleen (DiceIML/DiceManual = 0.91/0.89). For inter-rater consistency, significant (p < 0.045) differences were found for spleen (DiceIML/DiceManual = 0.91/0.87), breast (DiceIML/DiceManual = 0.86/0.81), lung (DiceIML/DiceManual = 0.85/0.89), and the non-enhancing (DiceIML/DiceManual = 0.79/0.67) and enhancing (DiceIML/DiceManual = 0.79/0.84) brain tumor sub-regions, which, in aggregation, favored our method. Quantitative evaluation of speed, spatial overlap, and consistency reveals the benefits of our proposed method compared with manual annotation for several clinically relevant problems. We publicly release our implementation through CaPTk (Cancer Imaging Phenomics Toolkit) and as an MITK plugin.
KW - Artificial intelligence
KW - Artificial intelligence segmentation
KW - Computed tomography
KW - Image segmentation
KW - Magnetic resonance imaging
UR - http://www.scopus.com/inward/record.url?scp=85113781900&partnerID=8YFLogxK
U2 - 10.3390/app11167488
DO - 10.3390/app11167488
M3 - Article
C2 - 34621541
AN - SCOPUS:85113781900
VL - 11
JO - Applied Sciences (Switzerland)
JF - Applied Sciences (Switzerland)
SN - 2076-3417
IS - 16
M1 - 7488
ER -