Crowdsourced Assessment of Surgical Skill Proficiency in Cataract Surgery

Grace L. Paley, Rebecca Grove, Tejas C. Sekhar, Jack Pruett, Michael V. Stock, Tony N. Pira, Steven M. Shields, Evan L. Waxman, Bradley S. Wilson, Mae O. Gordon, Susan M. Culican

Research output: Contribution to journal › Article › peer-review

9 Scopus citations


Objective: To test whether crowdsourced lay raters can accurately assess cataract surgical skills.

Design: Two-armed study with independent cross-sectional and longitudinal cohorts.

Setting: Washington University Department of Ophthalmology.

Participants and Methods: Sixteen cataract surgeons with varying experience levels submitted cataract surgery videos for grading by 5 experts and more than 300 crowdworkers, all masked to surgeon experience. Cross-sectional study: 50 videos from surgeons ranging from first-year resident to attending physician, pooled by years of training. Longitudinal study: 28 videos obtained at regular intervals as residents progressed through 180 cases. Surgical skill was graded using the modified Objective Structured Assessment of Technical Skill (mOSATS). Main outcome measures were overall technical performance, reliability indices, and the correlation between expert and crowd mean scores.

Results: Experts demonstrated high interrater reliability and accurately predicted training level, establishing construct validity for the mOSATS. Crowd scores correlated with expert scores (r = 0.865, p < 0.0001) but were consistently higher for first-, second-, and third-year residents (p < 0.0001, paired t-test). Longer surgery duration correlated negatively with training level (r = -0.855, p < 0.0001) and with expert score (r = -0.927, p < 0.0001). The longitudinal dataset reproduced the cross-sectional findings for crowd and expert comparisons. A regression equation transforming crowd score plus video length into expert score was derived from the cross-sectional dataset (r² = 0.92) and demonstrated excellent predictive performance when applied to the independent longitudinal dataset (r² = 0.80). A group of student raters who had edited the cataract videos also graded them, producing scores that approximated expert scores more closely than the crowd's.

Conclusions: Crowdsourced rankings correlated with expert scores but were not equivalent; crowd scores overestimated technical competency, especially for novice surgeons. A novel approach of adjusting crowd scores by surgery duration generated a more accurate predictive model of surgical skill. More studies are needed before crowdsourcing can be reliably used to assess surgical proficiency.
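The duration-adjusted model described in the Results amounts to a two-predictor linear regression (expert score predicted from crowd score and surgery duration). The sketch below, in Python with scikit-learn, is only illustrative: the variable names and example numbers are placeholders, not data or coefficients from the study.

import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative placeholders: mean crowd mOSATS score, surgery duration
# (minutes), and mean expert mOSATS score for each video.
crowd_score = np.array([3.1, 3.4, 3.8, 4.0, 4.3])
duration_min = np.array([42.0, 35.0, 28.0, 22.0, 18.0])
expert_score = np.array([2.2, 2.7, 3.3, 3.8, 4.2])

# Fit expert score as a linear function of crowd score and duration
# (analogous to deriving the regression equation on one dataset).
X = np.column_stack([crowd_score, duration_min])
model = LinearRegression().fit(X, expert_score)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("in-sample r^2:", model.score(X, expert_score))

# Apply the fitted equation to an independent video to predict its
# expert score from crowd score and duration.
X_new = np.column_stack([[3.6], [30.0]])
print("predicted expert score:", model.predict(X_new))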

Original language: English
Pages (from-to): 1077-1088
Number of pages: 12
Journal: Journal of Surgical Education
Issue number: 4
State: Published - Jul 1 2021


Keywords:
  • Crowdsourcing
  • cataract surgery
  • phacoemulsification
  • surgical assessment
  • surgical competence


