TY - JOUR
T1 - Interobserver Agreement Among Uveitis Experts on Uveitic Diagnoses
T2 - The Standardization of Uveitis Nomenclature Experience
AU - Standardization of Uveitis Nomenclature Working Group
AU - Jabs, Douglas A.
AU - Dick, Andrew
AU - Doucette, John T.
AU - Gupta, Amod
AU - Lightman, Susan
AU - McCluskey, Peter
AU - Okada, Annabelle A.
AU - Palestine, Alan G.
AU - Rosenbaum, James T.
AU - Saleem, Sophia M.
AU - Thorne, Jennifer
AU - Trusko, Brett
AU - Biswas, Jyotirmay
AU - Ohno, Shigeaki
AU - Mochizuki, Manabu
AU - Chee, Soon Phaik
AU - Prabriputaloong, Tisha
AU - Smith, Justine
AU - Stawell, Richard
AU - Lim, Lyndell
AU - Zamir, Ehud
AU - Wakefield, Dennis
AU - Barisani-Asenbauer, Talin
AU - Caspers, Laure
AU - Bodaghi, Bahram
AU - LeHoang, Phuc
AU - Brezin, Antoine
AU - Zierhut, Manfred
AU - Accorinti, Massimo
AU - Pivetti-Pezzi, Paola
AU - Neri, Piergiorgio
AU - Baarsma, Seerp
AU - Rothova, Aniki
AU - Becker, Matthias
AU - de Smet, Marc
AU - Graham, Elizabeth
AU - Forrester, John
AU - Murray, Philip
AU - Amer, Radgonde
AU - Kramer, Michal
AU - Habot-Wilner, Zohar
AU - Atmaca, Leyla
AU - Deschenes, Jean
AU - Belair, Marie Lyne
AU - Davis, Janet
AU - Galor, Anat
AU - Lowder, Careen
AU - Srivastava, Sunil
AU - Zegans, Michael
AU - Margolis, Todd
N1 - Publisher Copyright:
© 2017
PY - 2018/2/1
Y1 - 2018/2/1
N2 - Purpose: To evaluate the interobserver agreement among uveitis experts on the diagnosis of specific uveitic diseases. Design: Interobserver agreement analysis. Methods: Five committees, each composed of 9 individuals and working in parallel, reviewed cases from a preliminary database of 25 uveitic diseases, collected by disease, and voted independently online on whether the case was the disease in question or not. The agreement statistic, κ, was calculated for the 36 pairwise comparisons for each disease, and a mean κ was calculated for each disease. After the independent online voting, committee consensus conference calls, using nominal group techniques, reviewed all cases not achieving supermajority agreement (>75%) on the diagnosis in the online voting to attempt to arrive at a supermajority agreement. Results: A total of 5766 cases for the 25 diseases were evaluated. The overall mean κ for the entire project was 0.39, with disease-specific variation ranging from 0.23 to 0.79. After the formalized consensus conference calls to address cases that did not achieve supermajority agreement in the online voting, supermajority agreement overall was reached on approximately 99% of cases, with disease-specific variation ranging from 96% to 100%. Conclusions: Agreement among uveitis experts on diagnosis is moderate at best but can be improved by discussion among them. These data suggest the need for validated and widely used classification criteria in the field of uveitis.
AB - Purpose: To evaluate the interobserver agreement among uveitis experts on the diagnosis of specific uveitic diseases. Design: Interobserver agreement analysis. Methods: Five committees, each composed of 9 individuals and working in parallel, reviewed cases from a preliminary database of 25 uveitic diseases, collected by disease, and voted independently online on whether the case was the disease in question or not. The agreement statistic, κ, was calculated for the 36 pairwise comparisons for each disease, and a mean κ was calculated for each disease. After the independent online voting, committee consensus conference calls, using nominal group techniques, reviewed all cases not achieving supermajority agreement (>75%) on the diagnosis in the online voting to attempt to arrive at a supermajority agreement. Results: A total of 5766 cases for the 25 diseases were evaluated. The overall mean κ for the entire project was 0.39, with disease-specific variation ranging from 0.23 to 0.79. After the formalized consensus conference calls to address cases that did not achieve supermajority agreement in the online voting, supermajority agreement overall was reached on approximately 99% of cases, with disease-specific variation ranging from 96% to 100%. Conclusions: Agreement among uveitis experts on diagnosis is moderate at best but can be improved by discussion among them. These data suggest the need for validated and widely used classification criteria in the field of uveitis.
UR - http://www.scopus.com/inward/record.url?scp=85037650570&partnerID=8YFLogxK
U2 - 10.1016/j.ajo.2017.10.028
DO - 10.1016/j.ajo.2017.10.028
M3 - Article
C2 - 29122577
AN - SCOPUS:85037650570
SN - 0002-9394
VL - 186
SP - 19
EP - 24
JO - American Journal of Ophthalmology
JF - American Journal of Ophthalmology
ER -