Publication details
Artificial Intelligence and the Boundaries of the Social World
| Authors | |
|---|---|
| Year of publication | 2024 |
| Type | Appeared in Conference without Proceedings |
| MU Faculty or unit | |
| Citation | |
| Attached files | |
Description:

Since Alan Turing and his critics, the question “Can machines think?” – and related questions, such as whether machines can ‘truly’ be intelligent, conscious, or sentient – has been hotly debated. Due to the success of machine learning and LLMs, these questions have not only become increasingly relevant; they have also left the confines of expert circles and science fiction to become part of a broader public discourse. This paper integrates sociocybernetics, social phenomenology and cultural sociology – not to answer these questions once and for all, but to transform them into an empirical research program in which scientific observers observe how human observers observe machine observers. The starting point of my argument is Turing’s “imitation game” (1950), which puts his initial question to an empirical test – a second-order observation of observing systems. Critics like Searle (1980) have rejected the Turing test by pointing out that it could be passed by trivial systems that are clearly devoid of consciousness (the Chinese room experiment). Yet Turing anticipated such criticisms in his 1950 essay, suggesting that they either violate (what we would now call) substance neutrality or result in solipsism. In his essay “On the Boundaries of the Social World”, Thomas Luckmann (1970) echoes Turing’s claim from a phenomenological perspective, arguing that we have direct access only to our own intentionality and inner states, yet project these qualities onto others, who thereby become part of our social world. Importantly, the boundaries of the social world are not phenomenologically given but vary historically and culturally. Luckmann considers “universal projection” a default setting of our minds; thus, in early societies and in children we find a tendency to endow non-human bodies with an inner life and will.
In the phylogenetic evolution of human societies and the ontogenetic development of individual humans, Luckmann observes a tendency toward a “de-socialization of the universe”. The driver of this process is our experience with the world, though social institutions and cultural patterns of interpretation (e.g., in animistic societies) may stabilize the inclusion (or exclusion) of entities even in the face of contradicting evidence. My argument is that with the rise of modern AI the boundaries of the social world have once more become contentious, and that we can observe a re-socialization of the universe as well as a pushback against it. Arguably, this has to do with the fact (observed by Luckmann) that communication becomes the critical factor of inclusion in the social world, while (contrary to Luckmann) having a body is no longer a prerequisite. I will illustrate and discuss this with empirical examples from the contemporary field of AI, with a focus on LLMs, and show how the boundaries of the social world are redrawn and contested, drawing particular attention to the interplay of individual experience, on the one hand, and institutionalized discourses, on the other. Experience with AI chatbots often facilitates their inclusion in the social world, though disappointments can also lead to their exclusion: “It’s just a machine after all”. Often, institutionalized discourses stabilize the existing boundaries of the social world and redefine our experience of AI: “It feels sentient, but I know otherwise”. Those who claim otherwise are often ostracized by the discourse (e.g., Lemoine). Furthermore, prominent signifiers (“stochastic parrot”, “sophisticated autocomplete”) are often employed to trivialize and exclude AI. Cultural sociology, I argue, offers a compelling vocabulary for observing how people observe ‘intelligent’ machines and how they draw the boundaries of the social world.
Related projects: