Inter-rater reliability and review of the VA unresolved narratives.

J. C. Eagon, J. F. Hurdle, M. J. Lincoln

Research output: Contribution to journal › Article › peer-review



To better understand how VA clinicians use medical vocabulary in everyday practice, we set out to characterize terms generated in the Problem List module of the VA's DHCP system that were not mapped to terms in the controlled-vocabulary lexicon of DHCP. When an entered term fails to match a term in the lexicon, a note is sent to a central repository. When our study started, the volume in that repository had reached 16,783 terms. We wished to characterize the potential reasons why these terms failed to match terms in the lexicon. After examining two small samples of randomly selected terms, we used group consensus to develop a set of rating criteria and a rating form. To be sure that the results of multiple reviewers could be confidently compared, we analyzed the inter-rater agreement of our rating process. Two raters used this form to rate the same 400 terms. We found that modifiers and numeric data were common and consistent reasons for failure to match, while other reasons, such as the use of synonyms and the absence of the concept from the lexicon, were common but less consistently selected.

Original language: English
Pages (from-to): 130-134
Number of pages: 5
Journal: Proceedings : a conference of the American Medical Informatics Association / ... AMIA Annual Fall Symposium. AMIA Fall Symposium
State: Published - 1996


