AI in Brief: Leveraging Latent Data

There is more to radiology data than meets the eye. As radiology AI begins entering the mainstream, it is now more important than ever to turn attention toward the latent information contained in imaging examinations. Latent data is information that is not directly observed but can be inferred through a mathematical model.

In this quarter’s AI in Brief, we review five examples of AI in action where extracting and examining latent data is raising thought-provoking questions and driving innovation in radiology AI.

  • First, we look at Judy Wawira Gichoya, MD, MS, Assistant Professor in the Department of Radiology and Imaging Sciences at Emory University School of Medicine, and her team’s work exploring racial information hidden within imaging pixels that eludes human experts.
  • In addition, we explore how a University of California San Francisco (UCSF) team led by Jae Ho Sohn, MD, MS, UCSF Radiology, demonstrated the feasibility of using chest radiography to predict future healthcare expenses.
  • These considerations matter because AI can introduce automation bias. So we also review the recent FDA recommendations on automation bias, starting with large-vessel occlusion tools and extending beyond them.
  • This emerging attention on latent data highlights the importance of the patient’s perspective and external model validation. In JAMA Network Open, a team led by Dhruv Khullar, MD, MPP, Division of Health Policy and Economics, Department of Population Health Sciences, Weill Cornell Medical College, shared the patient’s perspective, where the highest-ranked concerns included privacy, misdiagnosis, and the lack of explainability of AI models.
  • And we close with the work from Alice C. Yu, MD, and her team, from the Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, which raises the counterintuitive point that larger datasets are not always better. Their premise is that validation with new, external data may matter more than big data in ensuring an AI model’s general applicability across different populations and settings.

 

1. "AI Recognition of Patient Race in Medical Imaging: A Modeling Study" A landmark study published in The Lancet Digital Health1 by a team led by Gichoya underscored the ability of AI models to learn racial identities from medical imaging, even in situations where radiologists cannot. The group’s manuscript details the high performance of a deep learning model tasked with predicting racial identity across imaging modalities and anatomic locations. The recent publication raises concern for inadvertent or intentional use of AI models to exacerbate healthcare disparities that disproportionately affect certain racial and socioeconomic demographics.

Further analysis examined several potential “proxies” for patient race to understand the algorithm’s high performance, including physical characteristics (e.g., body habitus), textural versus structural imaging features, image quality, and disease distribution. However, the rigorous assessment did not reveal convincing evidence of bias deriving from these factors.

Ultimately, the study results assert the need to include self-reported race among other demographics within every dataset and to institute performance audits to ascertain the influence of demographic factors on deployed model output. Though this study does not provide a definitive explanation for factors inherent to AI models that enable the prediction of self-reported race, the findings serve as a reminder for a deliberate, mindful approach before clearing potentially biased models for clinical care.
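
As a concrete, hedged illustration of what such an audit might involve, the minimal Python sketch below stratifies a deployed classifier’s discrimination by self-reported race; the file name, column names, and output format are hypothetical and are not drawn from the study.

```python
# Minimal sketch of a demographic performance audit (hypothetical file and columns).
# Assumes an export of deployed-model outputs with a per-exam probability score,
# the ground-truth label, and self-reported race recorded alongside each exam.
import pandas as pd
from sklearn.metrics import roc_auc_score

audit = pd.read_csv("model_outputs.csv")  # hypothetical export of model scores

pooled_auc = roc_auc_score(audit["label"], audit["score"])
print(f"Pooled AUC: {pooled_auc:.3f}\n")

print(f"{'Subgroup':<25}{'n':>8}{'AUC':>8}")
for race, group in audit.groupby("self_reported_race"):
    if group["label"].nunique() < 2:  # AUC is undefined for single-class subgroups
        continue
    auc = roc_auc_score(group["label"], group["score"])
    print(f"{race:<25}{len(group):>8}{auc:>8.3f}")
```

Subgroups whose AUC falls meaningfully below the pooled value could then be flagged for further review before the model is cleared for, or continues in, clinical use.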

2. "Prediction of Future Healthcare Expenses of Patients from Chest Radiographs Using Deep Learning: A Pilot Study"

In another example of extracting insight beyond information readily available to the human eye, researchers at UCSF’s Center for Intelligent Imaging developed deep learning models that predict future healthcare costs from chest X-rays (CXRs). Led by Sohn, the group’s manuscript in Nature Scientific Reports2 describes their models’ performance in classifying the top 50% of healthcare spenders at one, three, and five years based on information derived from over 30,000 emergency room patients’ CXRs and healthcare spending data.
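
As a rough sketch of this kind of modeling approach, the PyTorch snippet below fine-tunes an ImageNet-pretrained CNN to output a single “top-50% spender” logit. It is a generic illustration with hypothetical inputs and hyperparameters, not the UCSF group’s published architecture or code.

```python
# Generic sketch: binary "top-50% future healthcare spender" classifier from CXRs.
# Not the published model; data loading and label construction are assumed elsewhere.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V2")  # ImageNet-pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 1)     # single logit: high spender vs. not

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, is_high_spender: torch.Tensor) -> float:
    """images: (B, 3, 224, 224) CXR tensors; is_high_spender: (B,) float labels."""
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, is_high_spender)
    loss.backward()
    optimizer.step()
    return loss.item()
```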

The study’s findings are an ode to the immense amount of data hidden in medical imaging and the ability of deep learning to leverage this information for clinical insight. However, the potential to determine healthcare spending from medical imaging is a double-edged sword: Predictions could aid initiatives geared toward reducing healthcare spending or mitigating inadequate medical management, but these algorithms could alternatively be used by insurance agencies to selectively deny coverage by identifying high-risk individuals.

While the models’ generalizability is limited by factors such as reliance on a single hospital system and missing data, the study reinforces the need to maintain awareness of AI solutions’ profound ability to leverage rich data within medical imaging and to ensure responsible, ethical use of these tools.

3. "Perspectives of Patients About Artificial Intelligence in Healthcare" (JAMA)

Conversations about AI solutions invoke discussion about stakeholders — including physicians, administrators, data scientists, and medicolegal staff — but what about patients? Recent national survey data published in JAMA Network Open3 indicates that patients are optimistic about the role of AI in healthcare and want to know if AI is being used for diagnosis or treatment decisions.

Specifically, a majority of the 926 respondents believed that AI will make healthcare “somewhat better” and indicated that it is “very important” that patients are informed if AI plays a big role in their management. However, further data suggests that a given model’s task influences patient comfort with AI, as a minority of respondents were “very” or “somewhat” comfortable with AI making cancer diagnoses.

Regarding their primary reservations about AI, respondents noted concerns about misdiagnosis, compromised privacy, decreased time with physicians, and increased costs, with greater concern among racial and ethnic minority groups.

The study results signal maturing patient opinions about AI in healthcare, necessitating patient education about how AI is used and open communication about patients’ views. Understanding patient perspectives will allow physicians to take a more mindful approach when incorporating solutions that could potentially introduce bias or perpetuate healthcare disparities. A patient-centric approach will ultimately help maintain trust in physicians’ use of a growing array of AI solutions.

4. Who Owns LVO AI?

The FDA's announcement4 that AI-based large vessel occlusion (LVO) detection tools still require radiologist interpretation prompts discussion of automation bias and the role of imaging AI in care coordination among multidisciplinary care teams.

The last few months featured active discourse on the topic of autonomous AI, centered around the FDA announcement reminding clinicians that AI-based LVO triage tools (CADt) do not replace radiologist interpretation. The FDA reiterated that the role of LVO CADt devices is to improve workflow by flagging and prioritizing suspected cases, emphasizing the importance of radiologist review to avoid acting on false-negative or false-positive results.

The announcement addresses concern, supported by real-world data, that healthcare providers may treat LVO CADt devices as autonomous tools in clinical workflow, which could contribute to misdiagnosis and, ultimately, harm to patients. Accordingly, the FDA plans to work with device vendors to ensure open communication with healthcare providers about the intended use and appropriate role of AI-based devices in clinical workflow.

The FDA reminder was issued within weeks of an announcement that AI-application developer Oxipit5 had received a CE mark for its “ChestLink” autonomous AI suite. Oxipit CEO Gediminas Peksys claimed that “ChestLink ushers in the era of AI autonomy in healthcare,” citing high performance metrics and low error rates in post-deployment settings at pilot-stage institutions. These recent events underscore the heterogeneous approaches of regulatory bodies and the need for consistent, multidisciplinary conversations regarding the potential roles of autonomous AI versus the AI-physician team in patient care.

5. "External Validation of Deep Learning Algorithms for Radiologic Diagnosis: A Systematic Review"

Limited generalizability remains a concern, with skepticism about whether a model can maintain its performance outside the controlled context of the original study. External validation, the process of assessing algorithm performance on data from new locations, scanners, or patients, provides an opportunity to evaluate model performance in real-world contexts.
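
The toy, self-contained Python example below illustrates the concept with synthetic data standing in for imaging features: a single model is scored on an internal test split and on a deliberately shifted “external” set, and the gap between the two AUCs is the kind of performance drop the review tallies.

```python
# Toy illustration of external validation: one frozen model, two test sets.
# Synthetic data stands in for imaging features; added noise crudely mimics the
# scanner, protocol, and population differences of an external site.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=6000, n_features=20, n_informative=8, random_state=0)
X_dev, X_ext, y_dev, y_ext = train_test_split(X, y, test_size=0.3, random_state=0)
X_train, X_int, y_train, y_int = train_test_split(X_dev, y_dev, test_size=0.25, random_state=0)

# "External site": same task, shifted feature distribution.
X_ext = X_ext + np.random.default_rng(0).normal(0.0, 0.8, size=X_ext.shape)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc_internal = roc_auc_score(y_int, model.predict_proba(X_int)[:, 1])
auc_external = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
print(f"internal AUC {auc_internal:.3f} | external AUC {auc_external:.3f}")
```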

Led by Yu, researchers from Johns Hopkins University performed a systematic review6 to better understand the state of external validation for deep learning algorithms in radiology. Aligning with prior reviews, this study found that only a small number of articles on deep learning algorithms included external validation. The majority of articles in this cohort demonstrated “at least some” diminished performance on external datasets, with nearly half of the studies reporting “modest” diminution.

Notably, training on large datasets did not significantly influence external performance, a counterintuitive finding given the expectation that larger training sets should improve generalizability by exposing models to a wider range of features.

While understanding the underlying causes of relatively poor performance on external datasets requires further analysis, the findings of this review serve as a call to action to make external validation a necessary step in developing AI models, with reporting of detailed demographic data as a minimum requirement.

Watch for our next AI in Brief in the fall. We’ll be providing another update on noteworthy research and articles on AI. To receive these AI news updates automatically, subscribe to the ACR DSI Blog.

 

Ali Tejani, MD | Postgraduate Resident in Diagnostic Radiology | University of Texas Southwestern Medical Center

Po-Hao “Howard” Chen, MD, MBA | Chief Imaging Informatics Officer, IT Medical Director for Enterprise Radiology, and Staff Radiologist in Musculoskeletal Imaging | Cleveland Clinic

End Notes

  1. https://www.thelancet.com/journals/landig/article/PIIS2589-7500(22)00063-2/fulltext
  2. https://www.nature.com/articles/s41598-022-12551-4
  3. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2791851
  4. https://www.fda.gov/medical-devices/letters-health-care-providers/intended-use-imaging-software-intracranial-large-vessel-occlusion-letter-health-care-providers
  5. https://oxipit.ai/news/first-autonomous-ai-medical-imaging-application/
  6. https://pubs.rsna.org/doi/10.1148/ryai.210064
