Bridging the Gap Between AI and Reality • Rhodes, Greece
Time: Tuesday, 4.11
Room: Room A
Authors: Eva Schmidt, Sara Mann
Abstract: The concept of trustworthy AI is central to debates on artificial intelligence across politics, philosophy, computer science, and public discourse. Current perspectives on trustworthy AI can be grouped into four camps. First, “umbrellaists” use the term indiscriminately to denote broadly socially conforming AI systems (e.g., Herron et al. 2024, HLEG 2019, Spalazzese et al. 2025). Second, “sloganists” (e.g., tech companies) treat trustworthy AI as a vacuous marketing term for ethics washing or for avoiding regulation (critiqued by Metzinger 2019). Third, “denialists” argue that the concept of trustworthiness cannot be appropriately applied to AI systems, since these systems fall short with respect to its character-based component (such as a benevolent attitude or moral integrity; Al 2023, Budnik 2025, Metzinger 2019, Ryan 2020). Fourth, and for similar reasons, “reductionists” claim that, in the context of AI, trustworthiness reduces to well-functioning or reliability (Baron 2025, Simion & Kelp 2023). Proposals belonging to these different camps coexist without clear criteria for determining which should take precedence, an issue we call “fragmentation”.