A multi-society paper published today in JACR highlights the increasing need to monitor the use and safety of artificial intelligence (AI) algorithms after they have been integrated into clinical practice. Authors from the American College of Radiology, the Canadian Association of Radiologists, the European Society of Radiology, the Royal Australian and New Zealand College of Radiologists, and the Radiological Society of North America argued that AI can fulfill its promise to advance patient well-being only if all steps — from development to integration in healthcare — are rigorously evaluated.
While integrating AI into radiology has the potential to revolutionize the field, as AI’s abilities expand, it has become increasingly important to focus on continuous monitoring of its utility and safety. The authors, including ACR DSI Chief Medical Officer Bibb Allen, MD, FACR, and ACR Informatics Commission Chair Christoph Wald, MD, PhD, MBA, FACR, said monitoring the performance of AI models in clinical practice is needed so that any performance degradation can be quickly identified and appropriate measures taken to ensure patient safety. At a minimum, they recommend re-evaluating each model’s performance yearly — and more frequently where feasible — paying close attention to parameters known to be associated with drivers of input data drift.
“Continuous AI monitoring that captures model performance, examination parameters and patient demographics in data registries offers significant advantages, including being able to identify the causes of diminished performance in real time and the ability to provide developers with aggregated data for model improvement,” Dr. Allen emphasized.
The authors also suggest that cooperation between imaging AI developers, clinicians and regulators is the best way to allow everyone involved to address ethical issues and to monitor AI performance.
“AI in radiology should ultimately increase patient well-being, minimize harm, respect human rights and ensure that the benefits and harms are distributed among stakeholders in an equitable way. Since AI heavily relies on data, ethical issues relating to the acquisition, use, storage and disposal of data are central to patient safety and the appropriate use of AI,” Dr. Wald stressed.
Addressing these ethical issues in radiology AI will require a combination of technical solutions, government action, regulatory oversight, and ethical guidelines developed by a wide range of stakeholders, including clinicians, patients, AI developers and ethicists.
Read more in JACR.