ACR Data Science Institute (DSI) Chief Science Officer Keith Dreyer, DO, PhD, FACR, was featured in a Scientific American article about bias in artificial intelligence (AI) and how the humans who use it may unconsciously absorb those biases.
AI models can easily become more skewed than humans are, the magazine reported, citing a recent Bloomberg assessment that found generative AI may display stronger racial and gender biases than people do. Humans may also attribute more objectivity to machine-learning tools than they do to other sources.
Another major issue is a lack of transparency from AI developers about how their algorithms are built and trained, which makes it difficult to weed out AI bias. Dreyer told Scientific American that transparency is a problem even among approved medical AI tools.
The ACR has been advocating for increased transparency for years, writing in a 2021 article, “We need physicians to understand at a high level how these tools work, how they were developed, the characteristics of the training data, how they perform, how they should be used, when they should not be used, and the limitations of the tool.”
Read the full article on Scientific American.