
The Role of Explainable AI and Evaluation Frameworks for Safe and Effective Integration of Large Language Models in Healthcare

Journal contribution posted on 2024-09-20, authored by Sandeep Reddy, A Lebrun, A Chee, D Kalogeropoulos
The integration of artificial intelligence (AI), specifically large language models (LLMs), into healthcare continues to accelerate, necessitating thoughtful evaluation and oversight to ensure safe, ethical, and effective deployment. This editorial summarizes key perspectives from a recent panel discussion among AI experts on central issues in implementing LLMs for clinical applications. Key topics include: the potential of explainable AI to facilitate transparency and trust; challenges in aligning AI with variable global healthcare protocols; the importance of evaluation via translational and governance frameworks tailored to healthcare contexts; scepticism about overly expansive uses of LLMs for conversational interfaces; and the need to validate LLMs judiciously, with attention to risk levels. The discussion highlights explainability, evaluation, and careful deliberation with healthcare professionals as pivotal to realizing benefits while proactively addressing the risks of broader AI adoption in medicine.

History

Journal: Telehealth and Medicine Today
Volume: 9
Issue: 2
ISSN: 2471-6960
eISSN: 2471-6960
Publication classification: C2.1 Other contribution to refereed journal
Publisher: Partners in Digital Health