ACR DSI Chief Science Officer
Chief Data Science Officer, Massachusetts General Hospital and Brigham and Women’s Hospital
Boston, MA
The term Umwelt, coined by Baltic German biologist Jakob von Uexküll in the early 20th century, refers to the subjective perceptual world experienced by an individual organism: the reality created by its particular sensory systems and cognitive processes. Because those systems differ markedly among species, the concept emphasizes that different organisms perceive and interact with the world in distinct ways, each shaped by its own sensory experiences and cognitive abilities.
Commonly associated with the animal kingdom, the concept has not been widely applied to the realm of artificial intelligence. As AI systems evolve and display emergent behaviors, it becomes increasingly relevant to explore the possibility that AI systems, like other non-human intelligences, possess their own unique Umwelt. Moreover, since an AI system's intelligence and senses stem from an entirely different origin, it is reasonable to assume that its Umwelt would be even less similar to ours than that of other animals. This paper aims to analyze the implications of this notion and its potential impact on our expectations of AI, including transparency, explainability, and fairness.
Uexküll's concept of Umwelt has its roots in the field of ethology, the study of animal behavior. It emphasizes that different organisms perceive the world differently based on their sensory organs and cognitive abilities, leading to distinct perceptual worlds. For example, a bee's Umwelt is dominated by ultraviolet light and polarized light patterns, while a dog's Umwelt is strongly shaped by its sense of smell. The term has since been employed in various disciplines, such as ecology, philosophy, and semiotics. However, its application in the context of artificial intelligence remains largely unexplored.
While the concept of Umwelt is well-established in the study of animal behavior, it is particularly fascinating when applied to organisms possessing extra-human sensory capabilities. For instance, electric fish are equipped with an electroreceptive system that allows them to perceive electric fields in their surroundings. This unique sensory modality enables them to navigate, communicate, and locate prey in ways that are entirely foreign to human perception. Similarly, elephants are known to detect ground vibrations and low-frequency sounds through specialized receptors in their feet and trunks, providing them with crucial information about their environment and social interactions. These examples underscore the immense diversity and complexity of sensory experiences across the animal kingdom.
Given the vast range of sensory modalities and cognitive processes exhibited by different organisms, it is virtually impossible for humans to fully conceptualize or comprehend their respective Umwelts. Our own perceptual experiences and cognitive limitations constrain our ability to imagine and understand the sensory worlds of other species. This realization highlights the inherent challenges in attempting to access and appreciate the subjective experiences of organisms with extra-human sensory capabilities. As we continue to explore the concept of Umwelt in the context of artificial intelligence, it is essential to acknowledge these limitations and consider the implications they may have for our understanding and expectations of AI systems.
Considering the concept of Umwelt in the context of AI, it becomes apparent that, like other non-human intelligences, AI systems may have their own unique perceptual worlds. As such, their behavior and decision-making processes may not always be explainable or comprehensible in human terms. This raises questions about the feasibility and appropriateness of applying traditional expectations of transparency, explainability, and fairness to AI systems.
Drawing parallels between AI systems and non-human intelligences, such as domesticated animals, we can observe but not fully explain or predict their behaviors. A dog's owner, for example, may not comprehend the animal's sensory experiences or cognitive processes but learns to trust its expected behavior. Similarly, an advanced AI system's actions may not always be comprehensible to humans, yet we may eventually come to trust its emergent behaviors and decision-making processes through a similar learning process. And as with the dog, anthropomorphizing the AI's thought process may not be the best approach.
As AI systems become more complex and ubiquitous, concerns about their transparency, explainability, and fairness have come to the forefront. Transparency refers to the availability and accessibility of information about an AI system's internal workings. Explainability, on the other hand, relates to the ability to interpret and understand the logic behind an AI system's decision-making process. Fairness involves ensuring that AI systems do not discriminate against certain individuals or groups and that they provide equal opportunities for all.
As AI systems become more complex and rely on large-scale data and deep neural networks, the challenge of achieving transparency and explainability increases. Human understanding of these systems may be limited by the very nature of their architecture. Consequently, it is essential to reconsider our expectations of AI systems and explore alternative ways to assess their performance, such as establishing guidelines for their development and evaluating their outcomes in specific contexts.
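As a minimal sketch of what outcome-based assessment might look like, the hypothetical Python snippet below treats a trained model purely as a black box and scores its predictions separately for each deployment context (for example, per imaging site or patient cohort), rather than attempting to inspect its internal weights. The model, its predict_proba interface, and the context labels are illustrative assumptions, not a prescribed method.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_by_context(model, X_test, y_test, contexts):
    """Score a black-box model separately for each deployment context.

    Rather than inspecting the model's architecture, we judge it by its
    outcomes: one AUC per context (e.g., imaging site or patient cohort).
    """
    results = {}
    for context in np.unique(contexts):
        mask = contexts == context
        # predict_proba is an assumed interface; any opaque scoring function would do
        scores = model.predict_proba(X_test[mask])[:, 1]
        results[context] = roc_auc_score(y_test[mask], scores)
    return results

# Hypothetical usage: flag contexts whose performance falls below a guideline
# threshold agreed upon during development.
# per_context_auc = evaluate_by_context(model, X_test, y_test, site_labels)
# underperforming = {c: auc for c, auc in per_context_auc.items() if auc < 0.85}
```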
The notion of fairness in AI is often complicated by the fact that AI systems learn from data, which can contain historical biases and human prejudices. Ensuring fairness in AI systems may require embracing the complexity of the concept and accepting that the notion of fairness may differ across diverse contexts and perspectives. Moreover, when considering AI systems that continuously learn through Reinforcement Learning from Human Feedback (RLHF), fairness cannot be established once and for all a priori; it must be evaluated on a regular basis to account for changes in the system's decision-making as it learns from new data and feedback. This entails rethinking our expectations of AI systems and developing methods to identify, quantify, and mitigate potential biases throughout the system's lifecycle.
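To make the idea of recurring fairness evaluation concrete, here is a minimal, hypothetical sketch of one common audit: computing the demographic parity difference (the gap in positive-decision rates between groups) on each new batch of decisions, so that drift introduced by continued learning from feedback is caught over time. The metric choice, threshold, and group labels are illustrative assumptions rather than a recommended standard.

```python
import numpy as np

def demographic_parity_difference(decisions, groups):
    """Largest gap in positive-decision rate between any two groups.

    `decisions` is a binary array of model outputs; `groups` holds the
    protected-attribute label for each decision. A value near 0 suggests
    similar treatment across groups on this one metric.
    """
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def periodic_fairness_audit(decision_batches, group_batches, threshold=0.1):
    """Re-evaluate fairness on every batch of decisions, since a system that
    keeps learning from feedback cannot be certified fair once and for all."""
    for step, (decisions, groups) in enumerate(zip(decision_batches, group_batches)):
        gap = demographic_parity_difference(decisions, groups)
        if gap > threshold:
            print(f"Audit {step}: parity gap {gap:.2f} exceeds threshold; review needed")
        else:
            print(f"Audit {step}: parity gap {gap:.2f} within tolerance")
```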
Dr. Desmond Morris underscores the error of expecting animals to act like humans. In his book "The Naked Ape" (1967), Morris writes: “We are, after all, the most successful and most highly specialized of all the world's living forms, and it is tempting to think of all the others as having failed in some way, as having been less clever, less adaptable than ourselves, as having missed the boat. The result is that we expect other animals to behave like us and are then puzzled and fascinated when they do not.”
Morris highlights the human tendency to anthropomorphize animals, projecting our own behavioral expectations onto them and being surprised when they exhibit behaviors unique to their own species. The same reasoning applies to our expectations of artificial intelligence: we often project human-like qualities onto AI systems and are puzzled when they do not behave according to our understanding.
The concept of Umwelt, while traditionally applied to non-human organisms, offers valuable insights into our understanding and expectations of artificial intelligence. Recognizing that AI systems may possess their own unique perceptual worlds raises questions about the feasibility of applying anthropomorphic notions of transparency, explainability, and fairness to these systems. By rethinking our expectations, we can better comprehend the limitations of our understanding and adapt our approach to AI development and evaluation accordingly. Ultimately, this shift in perspective may lead to a more nuanced and informed approach to the design, deployment, and governance of artificial intelligence systems.
Dr. Dreyer is the ACR DSI Chief Science Officer and the Chief Data Science Officer and Chief Imaging Information Officer at Mass General Brigham. He is also an Associate Professor of Radiology at Harvard Medical School. He has authored hundreds of scientific papers, presentations, chapters, articles, and books, and has lectured worldwide on clinical data science, cognitive computing, clinical decision support, clinical language understanding, digital imaging standards, and the implications of technology on the quality of healthcare and payment reform initiatives.
As radiologists, we strive to deliver high-quality images for interpretation while maintaining patient safety, and to deliver accurate, concise reports that will inform patient care. We have improved image quality with advances in technology and attention to optimizing protocols. We have made a stronger commitment to patient safety, comfort, and satisfaction with research, communication, and education about contrast and radiation issues. But when it comes to radiology reports, little has changed over the past century.