Seeing is Not Always Believing: The Case for Causal AI in Radiology

Artificial intelligence (AI) has made tremendous strides in medical image analysis over the past decade. From detecting skin cancer to segmenting brain tumors, AI models now achieve accuracy that often matches or surpasses that of human experts. However, as these systems move closer to clinical deployment, researchers are asking important questions about their reliability and failure modes. 

A recent paper by Castro and colleagues1 argues that causal reasoning should be built into medical imaging AI. The authors contend that while current AI methods excel at finding patterns in data, they do not understand the reasons behind those patterns. This can lead to models that perform well in research studies but fail in real-world settings. 

For example, imagine an AI system trained to detect pneumonia on chest radiographs. If the training data come primarily from hospitalized patients, the model might learn to associate hospital equipment visible in the images (such as chest tubes) with pneumonia. That shortcut works in the training data but breaks down in outpatient settings, where such equipment is rarely present. 
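To make this failure mode concrete, here is a minimal synthetic sketch (a hypothetical illustration, not an analysis from the paper; the variable names, probabilities, and simple logistic regression are all assumptions) in which a "chest tube" shortcut predicts pneumonia in inpatient-style training data but not in outpatient-style deployment data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(n, p_tube_if_pneumonia, p_tube_if_healthy):
    """Hypothetical cohort: one genuine imaging feature plus a 'chest tube' shortcut."""
    pneumonia = rng.binomial(1, 0.3, n)
    # Weak genuine imaging signal (e.g., an opacity score derived from the image).
    opacity = pneumonia * 1.0 + rng.normal(0, 1.5, n)
    # Shortcut feature: hospital equipment visible in the image.
    chest_tube = rng.binomial(1, np.where(pneumonia == 1, p_tube_if_pneumonia, p_tube_if_healthy))
    return np.column_stack([opacity, chest_tube]), pneumonia

# Inpatient-style training data: chest tubes frequently co-occur with pneumonia.
X_train, y_train = simulate(5000, 0.70, 0.05)
# Outpatient-style deployment data: chest tubes are rare regardless of disease.
X_deploy, y_deploy = simulate(5000, 0.02, 0.02)

model = LogisticRegression().fit(X_train, y_train)
print("Accuracy on inpatient-style data:", round(model.score(X_train, y_train), 3))
print("Accuracy at outpatient deployment:", round(model.score(X_deploy, y_deploy), 3))
print("Learned weights [opacity, chest_tube]:", model.coef_.round(2))
```

The model leans heavily on the chest-tube indicator, so its accuracy drops as soon as that association disappears at deployment, even though nothing about the disease itself has changed.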

Ignoring causal relationships can lead to flawed and potentially dangerous AI systems. The paper highlights several key challenges in medical imaging AI through a causal lens: 

Data scarcity – Large, high-quality medical imaging datasets are difficult and expensive to obtain. Techniques such as semi-supervised learning try to compensate by exploiting unlabeled data, but the authors argue that, viewed causally, such techniques cannot be expected to help in every task. 

Dataset shift – Differences between training and deployment environments (like different patient demographics or imaging protocols) can severely degrade AI performance. Understanding the causal factors behind these shifts is crucial for developing robust models. 

Sample selection bias – How patients are chosen for training datasets can introduce subtle biases that shape model behavior. For instance, an AI trained only on images flagged as abnormal by radiologists may perform poorly on routine screening exams; a toy illustration follows this list. 
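As a rough illustration of how selection alone can mislead a model (again a hypothetical sketch with made-up numbers, not an example from the paper), the following simulation trains a classifier only on exams a radiologist flagged as abnormal and then examines its predicted risk in an unselected screening cohort:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def cohort(n, prevalence=0.02):
    """Hypothetical screening cohort with one summary imaging feature."""
    disease = rng.binomial(1, prevalence, n)
    feature = disease * 1.5 + rng.normal(0.0, 1.0, n)
    return feature.reshape(-1, 1), disease

# Pool of exams from which the training set is assembled.
X_pool, y_pool = cohort(200_000)
# Radiologists review the whole exam and clinical context, so truly diseased
# studies are far more likely to be flagged; only flagged exams are kept.
flagged = rng.binomial(1, np.where(y_pool == 1, 0.9, 0.1)).astype(bool)
model = LogisticRegression().fit(X_pool[flagged], y_pool[flagged])

# Fresh, unselected screening cohort for evaluation.
X_screen, y_screen = cohort(50_000)
mean_risk = model.predict_proba(X_screen)[:, 1].mean()
print(f"Prevalence in the selected training set: {y_pool[flagged].mean():.3f}")
print(f"True prevalence at screening:            {y_screen.mean():.3f}")
print(f"Mean predicted risk at screening:        {mean_risk:.3f}")
```

Because inclusion in the training set depends on disease status, the model systematically overestimates risk in the screening population; a causal description of how the data were collected is what makes this mismatch visible in advance.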

The paper advocates that researchers explicitly model the causal relationships in their data using techniques such as causal diagrams. These diagrams can surface hidden confounders, selection biases, and other issues that may not be obvious from the data alone. They also provide a framework for reasoning about how a model will behave under different real-world conditions. 
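To give a sense of what this looks like in practice, here is a hypothetical causal diagram for the pneumonia example above, written as a small directed graph (the node names and edges are illustrative assumptions, not a figure from the paper):

```python
import networkx as nx

# Hypothetical causal diagram for the pneumonia example; arrows point from
# cause to effect, and "in_training_set" records that the dataset was
# assembled from hospitalized patients.
dag = nx.DiGraph([
    ("pneumonia", "image"),               # disease produces imaging findings
    ("pneumonia", "hospitalized"),        # pneumonia can lead to admission
    ("other_illness", "hospitalized"),    # so can unrelated conditions
    ("other_illness", "chest_tube"),      # which may be treated with chest tubes
    ("chest_tube", "image"),              # tubes are visible on the radiograph
    ("hospitalized", "in_training_set"),  # the dataset samples inpatients
])

# Neither variable causes the other: there is no directed path in either
# direction between the chest tube and the disease.
print(nx.has_path(dag, "chest_tube", "pneumonia"))  # False
print(nx.has_path(dag, "pneumonia", "chest_tube"))  # False

# The association seen in training arises only because the data were selected
# through "in_training_set": hospitalization is a collider on the path
# pneumonia -> hospitalized <- other_illness -> chest_tube, and conditioning
# on (a descendant of) a collider opens that otherwise blocked path.
```

Reading the diagram, rather than the raw data, is what reveals that the chest-tube association is an artifact of how the dataset was assembled and should not be expected to transfer to outpatients.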

Some researchers have already begun exploring causal approaches for medical imaging AI. For example, Desai and colleagues2 developed a causal model for detecting hemorrhage in head CT scans that explicitly accounts for variables such as patient age and anticoagulant use. Their model showed improved generalization compared to traditional deep learning approaches.

While incorporating causality into AI systems is challenging, it is an important frontier for improving the safety and reliability of these tools. As Castro and colleagues conclude, "causal reasoning opens the door for researchers to go beyond association by allowing them to incorporate domain expertise when answering fundamental scientific questions."

As AI continues to advance in radiology and other medical fields, it is crucial to stay aware of its strengths and weaknesses. Causal AI offers a promising path forward, but it will require collaboration among computer scientists, statisticians, and clinical experts to reach its full potential. Only by deeply understanding the causal mechanisms underlying both disease and the imaging process itself can we develop truly robust and trustworthy AI systems for healthcare. 

For additional information on causal reasoning in AI: 

  1. Pearl J. Theoretical Impediments to Machine Learning with Seven Sparks from the Causal Revolution. arXiv:1801.04016 (2018).
  2. Hernán MA, Hsu J, Healy B. A Second Chance to Get Causal Inference Right: A Classification of Data Science Tasks. CHANCE. 32(1):42-49 (2019).

Yashbir Singh, ME, PhD | Assistant Professor, Mayo Clinic | Rochester, Minnesota
