International Symposium On Leveraging Applications of Formal Methods, Verification and Validation • Crete, Greece
Time: Saturday, 2.11
Room: Room 1
Authors: Hans-Johann Glock
Abstract: My input tries to shed light on the Question of AI Minds (Can AI systems have minds comparable to those of humans?) by comparing and contrasting it with the more longstanding Question of Animal Minds (Can animals have minds comparable to those of humans?). In a first step I narrow both questions down to the issue of intentionality: can animals and AI systems, respectively, represent the world as being a certain way? In a second step I shall pinpoint an important difference between the two cases. Regarding animals, the most serious challenge is to determine the content of their alleged intentional states in the absence of linguistic capacities. In the case of at least some AI systems, namely LLMs, if they are in intentional states, the content is easily established through their capacity to interact with humans linguistically. In fact, there has been a general tendency to underestimate LLMs as language models, while overestimating them as bona fide Artificial General Intelligence. The problem in their case is that their relation to the world is insufficient to constitute bona fide intentionality. This problem is particularly acute if, as I shall argue in a third step, the advanced cognitive capacity to represent the world as being a certain way is intrinsically related to the conative capacity of wanting it to be a certain way. Finally, in a fourth step I shall consider whether embodied and enacted AI might clear this hurdle. My answer is a tentative Yes. To be sure, we need to distinguish the subject- or system-level capacity to negotiate the world intelligently on the basis of (quasi-)perceptual input and linguistic communication from the ‘sub-personal’ information processing that causally enables these capacities. As regards the subject level, embodied AI can at least in principle come to tick all the relevant boxes. And as regards the sub-personal level, the ‘Bayesian brain hypothesis’ may be right in thinking that our capacities causally rest on processes that are at least formally analogous to those of ANNs.