Abstract: SA-PO1075
Artificial Intelligence Can Classify Human Kidney Biopsy Images
Session Information
- Pathology and Lab Medicine: Clinical
October 27, 2018 | Location: Exhibit Hall, San Diego Convention Center
Abstract Time: 10:00 AM - 12:00 PM
Category: Pathology and Lab Medicine
- 1502 Pathology and Lab Medicine: Clinical
Authors
- Matsumoto, Ayumi, Osaka University Graduate School of Medicine, Suita, Osaka-Fu, Japan
- Matsui, Isao, Osaka University Graduate School of Medicine, Suita, Osaka-Fu, Japan
- Shimada, Karin, Osaka University Graduate School of Medicine, Suita, Osaka-Fu, Japan
- Hashimoto, Nobuhiro, Osaka University Graduate School of Medicine, Suita, Osaka-Fu, Japan
- Doi, Yohei, Osaka University Graduate School of Medicine, Suita, Osaka-Fu, Japan
- Yamaguchi, Satoshi, Osaka University Graduate School of Medicine, Suita, Osaka-Fu, Japan
- Kubota, Keiichi, Osaka University Graduate School of Medicine, Suita, Osaka-Fu, Japan
- Oka, Tatsufumi, Osaka University Graduate School of Medicine, Suita, Osaka-Fu, Japan
- Sakaguchi, Yusuke, Osaka University Graduate School of Medicine, Suita, Osaka-Fu, Japan
- Hamano, Takayuki, Osaka University Graduate School of Medicine, Suita, Osaka-Fu, Japan
- Isaka, Yoshitaka, Osaka University Graduate School of Medicine, Suita, Osaka-Fu, Japan
Background
Diagnosis based on kidney biopsy is a complicated decision process that involves elements of uncertainty; appropriate diagnoses therefore require trained pathologists. If artificial intelligence could classify kidney biopsy images, it would support appropriate and objective diagnosis of kidney diseases.
Methods
We obtained micrographs of PAS-, PAM-, or Elastica-Masson (EM)-stained human kidney biopsy samples using a virtual slide system. A total of 13,017 square images were manually cropped from the micrographs and labelled into 87 (29 diseases × 3 stains) categories based on diagnoses made by at least two nephrologists. We also obtained 7,177 square tubulointerstitial images from PAS-stained sections. Validation datasets were generated from each dataset by random selection. GoogLeNet, a convolutional neural network, was used to classify these images.
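The abstract does not specify the training software, so the following is only a minimal sketch of a comparable setup using PyTorch/torchvision; the directory layout, image size, validation fraction, and hyperparameters are illustrative assumptions rather than details from the study.

```python
# Sketch of a GoogLeNet training setup for 87 (29 diseases x 3 stains) categories.
# All paths and hyperparameters below are hypothetical.
import torch
from torch import nn
from torch.utils.data import random_split, DataLoader
from torchvision import datasets, transforms, models

NUM_CLASSES = 87  # 29 diseases x 3 stains

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # GoogLeNet expects 224x224 inputs
    transforms.ToTensor(),
])

# Hypothetical folder layout: one subdirectory per disease/stain category.
full_set = datasets.ImageFolder("glomeruli_patches/", transform=transform)

# Hold out a randomly selected validation subset, as in the abstract
# (the 10% fraction is an assumption).
n_val = int(0.1 * len(full_set))
train_set, val_set = random_split(full_set, [len(full_set) - n_val, n_val])

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

# GoogLeNet (Inception v1) with the classifier resized to 87 categories.
model = models.googlenet(weights=None, num_classes=NUM_CLASSES, aux_logits=False)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# One illustrative training epoch followed by validation accuracy.
for x, y in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

model.eval()
correct = 0
with torch.no_grad():
    for x, y in val_loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
print("validation accuracy:", correct / len(val_set))
```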
Results
GoogLeNet was successfully trained to classify glomerular images: accuracy and loss on the validation dataset were 0.7948 and 1.1301, respectively. To confirm that GoogLeNet had truly learned glomerular features, we built a negative-control dataset of 52,068 images by adding 90°, 180°, and 270° rotations of the original 13,017 square glomerular images and labelling each image by both disease category and rotation. Training on this negative-control dataset yielded an accuracy of only 0.36866 and a loss of 1.47506. We also examined whether GoogLeNet could classify glomerular diseases from tubulointerstitial images; training on the PAS-stained tubulointerstitial images yielded an accuracy of 0.9475 and a loss of 0.2495.
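The rotation-based negative-control dataset could be generated along the following lines; this is an assumed sketch, not the authors' code, and the source/output directories, file extension, and folder-based labelling are hypothetical.

```python
# Build the negative-control dataset: each square glomerular image is rotated by
# 90, 180, and 270 degrees, and the label encodes both disease category and
# rotation, quadrupling 13,017 images to 52,068.
from pathlib import Path
from PIL import Image

SRC = Path("glomeruli_patches")   # hypothetical source directory (one folder per disease)
DST = Path("negative_control")    # hypothetical output directory

for img_path in SRC.rglob("*.png"):           # file extension is an assumption
    disease = img_path.parent.name            # category taken from the folder name
    image = Image.open(img_path)
    for angle in (0, 90, 180, 270):
        out_dir = DST / f"{disease}_rot{angle}"   # label = disease + rotation
        out_dir.mkdir(parents=True, exist_ok=True)
        image.rotate(angle, expand=True).save(out_dir / img_path.name)
```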
Conclusion
GoogLeNet can classify glomerular diseases not only from glomerular images but also from tubulointerstitial images.
Figure: Training curves. Light blue, deep blue, yellow, and red lines indicate training-data loss, validation-data loss, training-data accuracy, and validation-data accuracy, respectively.