Who Is Responsible (and Liable) for AI Use in Healthcare?

As the world embraces self-driving cars, manufacturing robots, smart assistants, social media monitoring software, and many other AI-enabled products and services, it is not surprising that AI-based devices are swiftly making their way into the healthcare industry. Several hundred AI/ML-enabled medical devices have received regulatory authorization since 1997 (via 510(k) clearance, a granted De Novo request, or an approved Premarket Approval), and over 70% of these are in the field of radiology. More detailed information about FDA-cleared AI medical products is now available from a number of resources, including the ACR Data Science Institute’s AI Central.

Integrating AI-based devices into medical practice has the potential to increase diagnostic accuracy, improve treatment regimens, and increase efficiency in diagnosing and treating patients by allowing physicians to focus on the diagnoses and procedures that require greater skill and judgment. However, as the number of these devices and applications grows, so does the number of questions and concerns pertaining to misdiagnosis, privacy breaches, bias, cost, and reimbursement.

Potential for Patient Harm Due to AI-Based Devices

Although fully autonomous AI diagnostic software is already a reality, such as the IDx-DR software for the diagnosis of diabetic retinopathy, at present all AI-based medical devices and software for diagnostic radiology are used as screening or confirmatory tools rather than as replacements for a trained healthcare provider. As such, it is not surprising that, according to a recent study by Aneja et al, both the general public and the majority of physicians still believe the physician should be held responsible when an error occurs (66.0% vs 57.3%; P = .020). Physicians are also more likely than the public to believe that vendors (43.8% vs 32.9%; P = .004) and healthcare organizations (29.2% vs 22.6%; P = .05) should share liability.

Someday, the AI solutions we use will be able to integrate more data at faster speeds than a human and provide even more sophisticated decision support to us, the expert physicians. That raises unanswered questions about what happens when the human expert disagrees with the algorithm on a finding such as the presence or absence of intracranial hemorrhage, and how those disagreements are perceived or potentially adjudicated. Will physicians be liable for disagreeing with or disregarding the output of a medical AI? Alternatively, if the AI is used for independent decision-making at any step in the care pathway and produces an output that harms a patient, will responsibility shift in any material way from the supervising physician to the AI developers or the medical device company? For now, since no diagnostic radiology models are cleared for autonomous use in the US, the responsibility remains with the radiologist. However, if autonomously functioning AI solutions are developed and cleared for clinical use, AI vendors and developers will have to shoulder more of the risk when a model fails to detect significant disease or instigates unnecessary treatment.

Use of Protected Health Information for AI Creation

To train and test AI-based devices, developers require access to large amounts of patient data. Data de-identification refers to the process of removing all information that could reasonably be used to identify the patient, and it is the basis for sharing data while preserving privacy. In the U.S., the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule governs de-identification of patient data. However, recent work has shown that elements of patients’ identity, such as race, can be predicted from de-identified data. Furthermore, models that enable data re-identification have raised concern and underscored the need for legal and regulatory action beyond the release-and-forget model of de-identification. It is important to bear in mind that the rules pertaining to data sharing and privacy are complex, and HIPAA violations can result in significant financial penalties, criminal sanctions, and civil litigation.
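As a minimal sketch of what header de-identification involves, the example below uses the open-source pydicom library to blank a handful of direct identifiers in a DICOM file. The tag list and replacement values are illustrative assumptions only; a compliant pipeline must address all 18 HIPAA Safe Harbor identifier categories (or use Expert Determination), as well as burned-in pixel annotations.

```python
# Minimal sketch of DICOM header de-identification using pydicom.
# Illustrative only: these tags cover just a few HIPAA Safe Harbor
# identifiers, not the full set a compliant pipeline must handle.
import pydicom

# Keyword names of a few direct identifiers to blank (illustrative subset).
IDENTIFYING_TAGS = [
    "PatientName",
    "PatientID",
    "PatientBirthDate",
    "PatientAddress",
    "ReferringPhysicianName",
    "InstitutionName",
    "AccessionNumber",
]

def deidentify(in_path: str, out_path: str) -> None:
    ds = pydicom.dcmread(in_path)
    for keyword in IDENTIFYING_TAGS:
        if keyword in ds:  # Dataset supports keyword membership tests
            ds.data_element(keyword).value = ""  # blank while preserving the element
    ds.remove_private_tags()  # vendor-private tags can also hide identifiers
    ds.save_as(out_path)

deidentify("study_original.dcm", "study_deidentified.dcm")
```

Even a sketch like this illustrates why release-and-forget is insufficient: the pixel data, acquisition parameters, and remaining metadata can still support re-identification, which is precisely the concern raised above.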

Transparent AI Creation and Implementation

As we navigate the uncharted territory of AI creation and implementation in the healthcare industry, it is imperative to adopt a culture of transparency. From an end-user perspective, transparency includes both explainability, so radiologists can understand how a model reached its conclusion, and details of how models were trained and validated, including the number of institutions, scanner types, and patient demographics represented. The ACR Data Science Institute has advocated for increasing transparency in AI with the FDA and participated in the FDA’s Virtual Public Workshop on AI transparency in October 2021. The FDA’s Digital Health Center of Excellence (DHCoE) is part of the planned evolution of the Digital Health Program in the Center for Devices and Radiological Health (CDRH). Its main goal is to empower stakeholders to advance health care by fostering responsible and high-quality digital health innovation.
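There is no single mandated format for these disclosures; the sketch below is a hypothetical illustration of the kind of training and validation details a transparent vendor disclosure might carry. All field names and values are assumptions for illustration, not an FDA- or ACR-defined schema.

```python
# Hypothetical record of model training/validation details; the fields
# are illustrative assumptions, not a mandated disclosure schema.
from dataclasses import dataclass

@dataclass
class ModelTransparencyRecord:
    model_name: str
    intended_use: str
    training_institutions: int        # number of sites contributing training data
    scanner_manufacturers: list[str]  # hardware diversity in the training set
    patient_demographics: dict[str, float]  # e.g., proportion by sex or age band
    external_validation: bool         # validated on data from unseen sites?

record = ModelTransparencyRecord(
    model_name="ICH-Detect (hypothetical)",
    intended_use="Triage of non-contrast head CT for intracranial hemorrhage",
    training_institutions=12,
    scanner_manufacturers=["GE", "Siemens", "Philips"],
    patient_demographics={"female": 0.52, "male": 0.48},
    external_validation=True,
)
```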

To ensure AI tools can be efficiently implemented into daily workflow and have the potential to improve the quality and efficiency of patient care, the ACR Data Science Institute has assembled subspecialty panels to review and publish structured use cases. These use cases, published freely with common data elements, empower AI developers to produce models that are clinically relevant, ethical, and effective, and provide pathways for workflow integration.
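To make the workflow-integration point concrete, the sketch below shows how a model output annotated with common data elements could be routed by downstream systems without parsing free text. The element names and IDs here are invented for illustration and are not actual ACR common data element identifiers.

```python
# Hypothetical AI result annotated with common data elements (CDEs);
# element names and IDs are invented for illustration.
import json

result = {
    "use_case": "Intracranial hemorrhage detection (illustrative)",
    "common_data_elements": [
        {"id": "CDE.EXAMPLE.001", "name": "HemorrhagePresence", "value": "present"},
        {"id": "CDE.EXAMPLE.002", "name": "HemorrhageType", "value": "subdural"},
    ],
    "confidence": 0.93,
}

# Because the output is structured, a worklist engine could, for example,
# prioritize positive cases by inspecting a well-defined element.
if any(cde["name"] == "HemorrhagePresence" and cde["value"] == "present"
       for cde in result["common_data_elements"]):
    print("Flag study for expedited review:", json.dumps(result, indent=2))
```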

It is crucial that developers, physicians, and professional organizations work together to safely integrate these AI-based devices into the clinical workflow. Where relevant, patients should be counseled about the risks and benefits of their use, so that they can make informed decisions. Liability for the use of AI will likely evolve over time as the sophistication of AI models evolves. As radiologists, we will undoubtedly find ourselves at the forefront of the penetration of AI into medicine, and although this will bring challenges and uncertainties, it will also present us with the opportunity to shape this new and exciting reality.

Irene Dixe de Oliveira Santo, MD | Integrated Interventional and Diagnostic Radiology Resident | Yale School of Medicine
Tessa Sundaram Cook, MD, PhD, CIIP, FSIIM, FCPP | Department of Radiology | Perelman School of Medicine at the University of Pennsylvania
