Kidney Week

Abstract: SA-PO014

Validation of an Electronic Phenotyping Algorithm for Nephrolithiasis

Session Information

Category: Augmented Intelligence, Digital Health, and Data Science

  • 300 Augmented Intelligence, Digital Health, and Data Science

Authors

  • Larson, Nicholas B., Mayo Foundation for Medical Education and Research, Rochester, Minnesota, United States
  • McDonnell, Shannon K., Mayo Foundation for Medical Education and Research, Rochester, Minnesota, United States
  • Ma, Jun, Mayo Foundation for Medical Education and Research, Rochester, Minnesota, United States
  • Frank, Jacob A., Mayo Foundation for Medical Education and Research, Rochester, Minnesota, United States
  • Chang, Alexander R., Geisinger Commonwealth School of Medicine, Scranton, Pennsylvania, United States
  • Bucaloiu, Ion D., Geisinger Commonwealth School of Medicine, Scranton, Pennsylvania, United States
  • Scheinman, Steven J., Geisinger Commonwealth School of Medicine, Scranton, Pennsylvania, United States
  • Harris, Peter C., Mayo Foundation for Medical Education and Research, Rochester, Minnesota, United States
  • Lieske, John C., Mayo Foundation for Medical Education and Research, Rochester, Minnesota, United States

Background

Computable phenotypes derived from electronic health record (EHR) data are highly useful for facilitating research on various disease conditions, including history of kidney stones (KS). However, evaluating the performance and limitations of such phenotyping algorithms is essential.

Methods

We defined a KS phenotyping algorithm using ICD-9/10 and CPT codes. To assess its performance, we designed a phenotyping validation study using EHR data from Mayo Clinic Biobank (MCBB) participants. Gold-standard chart abstraction was performed by two readers blinded to predicted KS status. A random sample of 150 predicted KS cases and 150 predicted controls was abstracted, and phenotyping performance was assessed by sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), with 95% confidence intervals (CIs) adjusted for verification bias. Inter-reader reliability was assessed via Cohen's κ on 80 participants evaluated by both readers. Finally, external validation was performed on a random sample of Geisinger MyCode/DiscovEHR participants.
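
The verification-bias adjustment can be illustrated with a minimal sketch (not the study's actual code; the function name and the Begg–Greenes-style formulation are illustrative assumptions). When chart review is performed only on sampled screen-positives and screen-negatives, PPV and NPV are estimated directly, and sensitivity and specificity are then recovered using the screen-positive rate in the full cohort:

    def corrected_se_sp(ppv, npv, screen_pos_rate):
        """Verification-bias correction (Begg-Greenes-style sketch).

        Reconstructs sensitivity and specificity from directly estimated
        PPV and NPV plus the screen-positive rate in the full cohort.
        """
        p = screen_pos_rate
        # True disease prevalence implied by PPV, NPV, and the screen-positive rate
        prevalence = ppv * p + (1.0 - npv) * (1.0 - p)
        sensitivity = ppv * p / prevalence
        specificity = npv * (1.0 - p) / (1.0 - prevalence)
        return sensitivity, specificity, prevalence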

Results

Among 46,207 MCBB participants eligible for our study, 3917 (7.9%) screened positive using the KS algorithm. For the 80 MCBB participants abstracted by both readers, abstracted KS status matched in 75/80 (93.8%) (κ = 0.88; 95% CI: [0.77, 0.98]). Estimated performance measures are reported in Table 1. Overall, specificity was very high at 0.992, but sensitivity was moderate at 0.456. These estimates suggest a true MCBB KS prevalence of ~15.6%. Similar performance was observed in the MyCode/DiscovEHR participants.
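
As a worked check, plugging the rounded MCBB estimates into the corrected_se_sp sketch shown after the Methods section approximately reproduces the reported figures (small differences reflect rounding of the inputs):

    se, sp, prev = corrected_se_sp(ppv=0.913, npv=0.905, screen_pos_rate=0.079)
    print(round(se, 3), round(sp, 3), round(prev, 3))
    # ~0.452 0.992 0.160, close to the reported sensitivity 0.456,
    # specificity 0.992, and implied KS prevalence of ~15.6%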

Conclusion

Our code-based KS electronic phenotyping algorithm demonstrated excellent specificity but moderate sensitivity. Additional sensitivity may be achievable through natural language processing or similar AI-based interpretation of clinical notes, and/or inclusion of patient questionnaire data.

Table 1. Performance Measures

Measure   MCBB: Estimate [95% CI]   MyCode/DiscovEHR: Estimate [95% CI]
PPV       0.913 [0.857, 0.949]      0.923 [0.832, 0.967]
NPV       0.905 [0.848, 0.843]      0.827 [0.767, 0.873]
Sens      0.456 [0.416, 0.496]      0.348 [0.315, 0.381]
Spec      0.992 [0.990, 0.993]      0.991 [0.988, 0.993]

Phenotyping algorithm performance measures and 95% confidence intervals. PPV = positive predictive value, NPV = negative predictive value, CI = confidence interval, Sens = sensitivity, Spec = specificity

Funding

  • NIDDK Support