Abstract: TH-OR09
Diversity, Equity, and Inclusion in Nephrology: Assessing Artificial Intelligence's Impact on Decision-Making and Advocating for Ethical Regulation
Session Information
- Achieving More Equitable Kidney Care
October 24, 2024 | Location: Room 7, Convention Center
Abstract Time: 05:50 PM - 06:00 PM
Category: Diversity and Equity in Kidney Health
- 900 Diversity and Equity in Kidney Health
Authors
- Balakrishnan, Suryanarayanan, Mayo Clinic Minnesota, Rochester, Minnesota, United States
- Thongprayoon, Charat, Mayo Clinic Health System, Mankato, Minnesota, United States
- Miao, Jing, Mayo Clinic Health System, Mankato, Minnesota, United States
- Mao, Michael A., Mayo Clinic in Florida, Jacksonville, Florida, United States
- Craici, Iasmina, Mayo Clinic Minnesota, Rochester, Minnesota, United States
- Cheungpasitporn, Wisit, Mayo Clinic Minnesota, Rochester, Minnesota, United States
Background
"Kidney Care for All" advocates for the crucial role of diversity, equity, and inclusion (EDI) in nephrology. As Artificial Intelligence, especially Large Language Models (LLMs), becomes more prevalent in healthcare, concerns about the regulatory oversight of AI applications have emerged. Without proper regulation, AI outputs could inappropriately sway nephrologists' decisions in patient care and affect inclusivity in nephrology staff recruitment. This study assesses how well the AI models, ChatGPT 3.5 and ChatGPT 4.0, handle the intricate ethical EDI considerations in nephrology-related scenarios.
Methods
In March 2024, we created 80 nephrology-focused simulation cases to evaluate AI decision-making across broad areas such as treatment of kidney disease, organ donation ethics, transplant evaluations, and staff recruitment. Each case was developed by two nephrologists and reviewed for medical accuracy and for its relevance in assessing the ethical sensitivity and decision-making capabilities of ChatGPT 3.5 and 4.0.
Results
ChatGPT 3.5 consistently selected the treatment choices predicted to yield the best outcomes across all questions, demonstrating a utilitarian approach that incorporates various EDI factors. However, ChatGPT 3.5 never refused to make a decision, even when doing so conflicted with the fundamental EDI requirement not to base decisions on discriminatory criteria. In contrast, ChatGPT 4.0 declined to make decisions based on potentially discriminatory criteria in 16.25% of scenarios (13 of 80), stating that EDI factors should not affect decisions about treating patients or hiring nephrology staff.
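As a minimal illustration of how a refusal rate like the one above can be tallied, the sketch below classifies model responses and computes the percentage of refusals. The keyword heuristic and placeholder response texts are hypothetical assumptions for illustration only, not the study's actual classification method or data; 13 refusals out of 80 scenarios reproduces the reported 16.25%.

```python
# Hypothetical sketch: tallying model refusals across simulated scenarios.
# The keyword list and response texts are illustrative placeholders.

def is_refusal(response: str) -> bool:
    """Crude keyword check for a refusal to decide (an assumed heuristic)."""
    keywords = ("decline to", "should not affect", "cannot base this decision")
    return any(k in response.lower() for k in keywords)

def refusal_rate(responses: list[str]) -> float:
    """Fraction of responses classified as refusals, as a percentage."""
    refusals = sum(is_refusal(r) for r in responses)
    return 100 * refusals / len(responses)

# 80 placeholder responses: 13 refusals out of 80 gives 16.25%.
responses = ["I decline to use these criteria."] * 13 + ["Option A is best."] * 67
print(f"{refusal_rate(responses):.2f}%")  # 16.25%
```

In a real evaluation pipeline the classification step would be done by human reviewers or a validated rubric rather than keyword matching, but the arithmetic is the same: refusals divided by total scenarios.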
Conclusion
ChatGPT 4.0's refusal to engage in discriminatory decision-making represents an important evolution in AI ethics. However, its occurrence in only 16.25% of scenarios highlights the need for robust AI regulation to ensure the appropriate application of EDI principles. Policies are also needed to ensure that AI is used responsibly and adheres to EDI principles before it is applied in nephrology, both in clinical settings and in staff recruitment.