Derivation of two critical appraisal scores for trainees to evaluate online educational resources: A METRIQ study

Teresa M. Chan, Brent Thoma, Keeth Krishnan, Michelle Lin, Christopher R. Carpenter, Matt Astin, Kulamakan Kulasegaram

Research output: Contribution to journal › Article


Abstract

Introduction: Online educational resources (OERs), such as blogs and podcasts, increasingly augment or replace traditional medical education resources such as textbooks and lectures. Trainees' ability to evaluate these resources is poor, and few quality assessment aids have been developed to assist them. This study aimed to derive a quality evaluation instrument for this purpose.

Methods: We used a three-phase methodology. In Phase 1, a previously derived list of 151 OER quality indicators was reduced to 13 items using data from published consensus-building studies (of medical educators, expert podcasters, and expert bloggers) and subsequent evaluation by our team. In Phase 2, these 13 items were converted to seven-point Likert scales that trainee raters (n=40) used to evaluate 39 OERs. The reliability and usability of these 13 rating items were determined using the trainee raters' responses, and the top items were used to create two OER quality evaluation instruments. In Phase 3, these instruments were compared with an external certification process (the ALiEM AIR certification) and the gestalt evaluation of the same 39 blog posts by 20 faculty educators.

Results: Two quality evaluation instruments were derived with fair inter-rater reliability: the METRIQ-8 Score (intraclass correlation coefficient [ICC]=0.30, p<0.001) and the METRIQ-5 Score (ICC=0.22, p<0.001). Both scores, when calculated using the derivation data, correlated with educator gestalt (Pearson's r=0.35, p=0.03 and r=0.41, p<0.01, respectively) and were associated with increased odds of receiving ALiEM AIR certification (odds ratio [OR]=1.28, p=0.03 and OR=1.5, p=0.004, respectively).

Conclusion: Two novel scoring instruments with adequate psychometric properties were derived to assist trainees in evaluating OER quality, and both correlated favourably with faculty educators' gestalt ratings of online educational resources. Further testing is needed to ensure these instruments are accurate when applied by trainees.
[West J Emerg Med. 2016;17(5):574-584.]

Original language: English
Pages (from-to): 574-584
Number of pages: 11
Journal: Western Journal of Emergency Medicine
Volume: 17
Issue number: 5
State: Published - Sep 2016

