AI in Brief: The AI Lifecycle


For years, the conversation surrounding imaging AI focused on how the technology would solve the countless problems in radiology. While these conversations remain relevant, the success of AI in radiology involves a much broader discussion. The AI Lifecycle refers to the entire process that an AI system goes through, from conceptualization and data engineering to modeling and validation, and eventually to creating benefits for patients and possible reimbursement for the caregiver.

In this quarter’s AI in Brief, we review five advancements from various stages of the lifecycle and examine how these broader considerations raise important questions about AI.

● First, we look at the work of Xueyan Mei, PhD, and team in building RadImageNet, a large, curated set of neural network weights derived from radiology images.

● In addition, we explore how a UT Southwestern team led by Ali Tejani, MD, showed that natural language processing can turn radiology reports into a minable database devoid of sensitive information.

● The work of Dina Katabi and her team from the Massachusetts Institute of Technology is an example of how data and a deep learning model are very different from an AI product.

● As AI makes its way to clinical deployment, it makes tradeoffs: every ounce of benefit comes with a cost. A recent Radiology:AI article by Drs. Colin Rowell and Ronnie Sebro explores how these considerations affect the finances of AI.

● Finally, we close with a review of a recent Radiology article by Daye et al on governance, which brings together stakeholders from each part of the AI lifecycle to oversee AI deployment decisions.

A Common Starting Point

https://pubs.rsna.org/doi/10.1148/ryai.210315

This paper published in Radiology:AI by a team led by Xueyan Mei, PhD, and Yang Yang, PhD, at Mount Sinai Hospital in New York City, describes a new resource that could give future radiology AI applications a large boost in accuracy.

RadImageNet is a set of specialized weights, or “starter instructions,” that help radiology AI systems understand the images they see. AI systems generally learn to recognize diseases from large amounts of data, with a set of weights as a starting point. The most common starting weights come from models pretrained on ImageNet, a large dataset of everyday photographs, and they teach AI systems to detect basic shapes. However, these weights are not tailored to medical images.

RadImageNet provides a specialized set of starter weights created from radiology images. AI systems built with RadImageNet as a starting point were found to perform better on radiology tasks.

The weights and code are available at https://github.com/BMEII-AI/RadImageNet.
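For readers curious what transfer learning from RadImageNet looks like in practice, here is a minimal PyTorch sketch. The checkpoint filename, class count, and dummy batch below are illustrative assumptions, not part of the published resource; see the repository above for the actual distributed checkpoints.

    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    NUM_CLASSES = 2  # illustrative downstream task: finding present vs. absent

    # Standard ResNet-50 backbone, initialized from RadImageNet weights
    # instead of the usual ImageNet weights. The filename is hypothetical.
    model = resnet50(weights=None)
    state_dict = torch.load("RadImageNet-ResNet50.pt", map_location="cpu")
    model.load_state_dict(state_dict, strict=False)  # load backbone layers only

    # Swap in a fresh classification head for the new task, then fine-tune.
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    # One illustrative training step on a dummy batch of preprocessed images.
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, NUM_CLASSES, (8,))
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

The key idea is that only the final classification layer is task-specific; the pretrained backbone supplies the radiology-tuned “starter instructions” described above.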

AI to Preprocess Radiology Reports

https://pubs.rsna.org/doi/10.1148/ryai.220007

A team of researchers led by Ali Tejani, MD, from UT Southwestern tested multiple natural language processing (NLP) systems on chest radiograph text reports. These systems were trained to look for descriptions of various lines and tubes.

The systems performed near-perfectly, with AUC values between 0.936 and 0.996. The team found that newer transformer models powering NLP systems, such as DeBERTa, DistilBERT, and RoBERTa, performed much better than the original BERT model.
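To make the approach concrete, here is a minimal sketch of transformer-based report classification using the open-source Hugging Face transformers library. The model name, label set, and sample sentence are illustrative, and the classification head shown here is untrained; it would need fine-tuning on labeled reports before producing meaningful predictions.

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    MODEL_NAME = "distilbert-base-uncased"  # one of the architecture families compared

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

    # Illustrative report sentence and binary label space
    # (e.g., endotracheal tube mentioned vs. not mentioned).
    report = "Endotracheal tube terminates 4 cm above the carina."
    inputs = tokenizer(report, return_tensors="pt", truncation=True)

    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)
    print(probs)  # meaningless until the head is fine-tuned on labeled reports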

While many collections of medical images are available to the public, there is a relative lack of publicly available text-based radiology reports, largely because of privacy concerns regarding protected health information (PHI). Radiology images are structured by the DICOM format in a way that makes PHI straightforward to remove; radiology reports, by contrast, are unstructured free text. As such, there is no guaranteed way to remove PHI from reports other than manual annotation.

The near-perfect performance of these NLP systems on the lines-and-tubes task suggests that computers may soon be able to independently remove PHI from radiology reports and clinical documents. This may fuel the creation of more open-source radiology datasets for researchers.
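As a rough illustration of automated de-identification, the sketch below uses a general-purpose named-entity recognition model to redact names from free text. The model shown ("dslim/bert-base-NER") is a publicly available stand-in trained on news text, not clinical PHI categories; a real system would also need to handle dates, medical record numbers, and other identifiers.

    from transformers import pipeline

    # General-purpose NER model: recognizes persons, organizations, and
    # locations, but not dates or medical record numbers.
    ner = pipeline("token-classification",
                   model="dslim/bert-base-NER",
                   aggregation_strategy="simple")

    report = "Dictated by Dr. Jane Doe for patient John Smith on 01/02/2023."

    # Replace each detected entity span with a category placeholder, working
    # right to left so that earlier character offsets remain valid.
    redacted = report
    for ent in sorted(ner(report), key=lambda e: e["start"], reverse=True):
        redacted = (redacted[:ent["start"]]
                    + f"[{ent['entity_group']}]"
                    + redacted[ent["end"]:])

    print(redacted)  # names are masked; the date passes through untouched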

Breathing Patterns and Parkinson’s Disease

https://news.mit.edu/2022/artificial-intelligence-can-detect-parkinsons-from-breathing-patterns-0822

This paper, published in Nature Medicine, describes an AI model that detects and tracks Parkinson’s Disease (PD) from nocturnal breathing patterns. The model was developed by a team led by researchers from MIT. Patients were monitored while sleeping, either with a contactless device that extracts breathing signals from reflected radio waves or with a wearable belt that tracks breathing. The AI model detected the presence of PD with an AUC of 0.90 and 0.85 on held-out and external test sets, respectively. The model could also estimate the severity and progression of PD in accordance with the Movement Disorder Society Unified Parkinson’s Disease Rating Scale (MDS-UPDRS), a commonly used questionnaire.

This work is significant because there are currently no effective biomarkers for diagnosing Parkinson’s Disease or for tracking its progression. As such, this paper demonstrates the possibility of objective, noninvasive, at-home assessment of PD. In addition, the model shows that severity scores can be extracted from breathing patterns, which may be useful for risk assessment before clinical diagnosis. A relationship between abnormal breathing and Parkinson’s has been suspected for centuries: it was described in the work of James Parkinson, the English doctor who first wrote about the disease in 1817.

Who pays for AI?

https://pubs.rsna.org/doi/10.1148/ryai.220054

This editorial, published in Radiology:AI, discusses the complex ownership of the data used in medical artificial intelligence systems. The article begins with a discussion of the lifecycle of AI systems, from idea conception to long-term model management. It then describes the types of data that AI uses and the relevant financial implications, and it examines possible mismatches between the flow of data and the flow of reimbursement dollars for AI.

In particular, this paper calls attention to the numerous stakeholders involved in creating the data required for AI models. Patients, healthcare systems, insurance companies, and healthcare providers can all be thought of as contributors to AI models. As such, more discussion from the AI community is needed to untangle the complex web of data ownership. This is especially urgent as the number of FDA-approved AI models grows exponentially, year after year.

How to Get AI Governance Right?

https://pubs.rsna.org/doi/10.1148/radiol.212151

This paper, published in Radiology, describes the processes necessary for stakeholders across a healthcare organization to evaluate the purchase and implementation of radiology AI systems.

Specifically, this paper introduces a framework for evaluating products based on six categories: Ease of Use/Performance, Technical Readiness, Value, Clinical Impact, Fairness/Bias/Harm, and Scientific Evidence.

Also discussed is the often-neglected topic of post-deployment maintenance and monitoring. The authors note that even when AI products have FDA approval, significant resources are required after deployment; steadfast monitoring is necessary to ensure that the systems continue to work as intended.
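As one simple illustration of what such monitoring can look like in code, the sketch below tracks a deployed model’s rolling positive-finding rate against a validation-time baseline and raises an alert on drift. The baseline rate, tolerance, and window size are arbitrary assumptions for demonstration, not values from the paper.

    import random
    from collections import deque

    BASELINE_POSITIVE_RATE = 0.12  # positive-finding rate measured at validation
    TOLERANCE = 0.05               # acceptable absolute deviation from baseline
    WINDOW = 500                   # number of recent predictions in the rolling window

    recent = deque(maxlen=WINDOW)

    def record_prediction(is_positive: bool) -> None:
        """Log one model output and alert when the rolling rate drifts."""
        recent.append(is_positive)
        if len(recent) == WINDOW:
            rate = sum(recent) / WINDOW
            if abs(rate - BASELINE_POSITIVE_RATE) > TOLERANCE:
                print(f"ALERT: rolling positive rate {rate:.2f} deviates from "
                      f"baseline {BASELINE_POSITIVE_RATE:.2f}; investigate drift.")

    # Simulate a stream of predictions from a model whose behavior has shifted.
    for _ in range(1000):
        record_prediction(random.random() < 0.25)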

Jason Adleberg, MD | Radiology Resident at Mount Sinai West 

Po-Hao "Howard" Chen, MD, MBA | Chief Imaging Informatics Officer, IT Medical Director for Enterprise Radiology, and Staff Radiologist in Musculoskeletal Imaging | Cleveland Clinic
