Moral Responsibility, AI, and Complex Collectives
Maximilian Kiener
21 December 2023
Maximilian Kiener (TUHH) presents their project in the Interdisciplinary Research Seminar of the GRK "Collective Decision-Making", 17:15–18:45.
Location: Room 0079, Von-Melle-Park 5
Abstract
The development and use of artificial intelligence (AI) involves numerous people and complex collectives, consisting of users, computer scientists, engineers, regulators, and more. This situation can lead to a so-called problem of ‘many hands’, where the complexity of collectives and groups, as well as the diffusion of agency, impedes the attribution of responsibility. For this and other reasons, scholars argue that the use of AI will lead to responsibility gaps, i.e. situations in which no one is individually or collectively morally responsible for the harm caused by AI, because no one satisfies the conditions of moral responsibility. In this paper, I acknowledge that there is a significant challenge around responsibility and AI. Yet, I don’t think that this challenge is best captured in terms of a responsibility gap. Instead, I argue for the opposite view, namely that there is responsibility abundance, i.e. a situation in which numerous agents (including collectives) are responsible for the harm caused by AI, and that the challenge comes from the difficulties in dealing with such abundance in practice. I conclude by arguing that reframing the challenge in this way offers distinct dialectical, theoretical, and practical advantages, promising to help overcome some obstacles in the current debate surrounding ‘responsibility gaps’.