Abstract: FR-PO072
Enhancing Patient Education: ChatGPT's Potential in Addressing Dialysis FAQs
Session Information
- Educational Research
November 03, 2023 | Location: Exhibit Hall, Pennsylvania Convention Center
Abstract Time: 10:00 AM - 12:00 PM
Category: Educational Research
- 1000 Educational Research
Authors
- Davis, Paul W., Mayo Clinic Minnesota, Rochester, Minnesota, United States
- Craici, Iasmina, Mayo Clinic Minnesota, Rochester, Minnesota, United States
- Krisanapan, Pajaree, Mayo Clinic Minnesota, Rochester, Minnesota, United States
- Tangpanithandee, Supawit, Mayo Clinic Minnesota, Rochester, Minnesota, United States
- Thongprayoon, Charat, Mayo Clinic Minnesota, Rochester, Minnesota, United States
- Cheungpasitporn, Wisit, Mayo Clinic Minnesota, Rochester, Minnesota, United States
Background
Patient education empowers those with ESKD to understand and navigate their treatment options. FAQ websites are valuable resources for information on dialysis, but the effectiveness of AI in addressing patient queries about dialysis remains unexplored. ChatGPT, an AI model powered by natural language processing, has shown promise in providing accurate information across varied domains. This study evaluated ChatGPT's performance in delivering accurate patient education on dialysis.
Methods
A total of 57 patient questions related to dialysis were collected from the official Mayo Clinic patient education website. Each question was then presented in five variations: the original question, a paraphrase using a different interrogative adverb, a paraphrase as an incomplete sentence, a paraphrase containing misspelled words, and a paraphrase with verbs and prepositions removed. ChatGPT (March 23 Version) generated responses to each variation, and the accuracy of its answers was evaluated by nephrologists against the FAQ website.
Results
ChatGPT provided accurate responses to all 57 questions across all five variations: 1) original questions, 2) paraphrases with different interrogative adverbs, 3) paraphrases as incomplete sentences, 4) paraphrases with misspelled words, and 5) paraphrases with verbs and prepositions removed. However, one response was inconsistent. Asked "What are the requirements for a patient undergoing hemodialysis?", ChatGPT initially provided the expected requirements related to patient behavior but later included technical requirements for dialysis candidacy.
Conclusion
These results highlight ChatGPT's accuracy in providing education on dialysis across different complexity levels and variations in question paraphrasing, indicating its potential to serve as a valuable resource for patient information on dialysis, supplementing FAQ websites. However, the one inconsistent response underscores the need for refinement to ensure reliable information. With such improvements, AI models could contribute significantly to patient education, further empowering those with ESKD.